
πŸ₯ LungsCareAI - AI-Powered Lung Analysis System


LungsCareAI is a comprehensive, full-stack AI-powered medical diagnostic system for lung disease detection and analysis. It combines advanced deep learning models with RAG (Retrieval-Augmented Generation) technology to provide intelligent medical insights through both lung audio analysis and chest X-ray classification.

LungsCareAI Demo

✨ Features

🎵 Audio Analysis

  • Binary Classification: Normal vs Abnormal lung sounds
  • XAI (Explainable AI): Gradient saliency and attention rollout visualizations
  • Model: Fine-tuned Audio Spectrogram Transformer (AST)
  • Supported Formats: WAV, MP3, M4A, FLAC
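Conceptually, the binary decision reduces to a softmax over the model's two output logits. A minimal sketch in plain Python (the logits below are made-up stand-ins for the AST model's actual output):

```python
import math

# Hypothetical post-processing step: map two raw logits from the
# fine-tuned AST model to a (label, confidence) pair via softmax.
def classify_lung_sound(logits, labels=("Normal", "Abnormal")):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]  # subtract max for stability
    total = sum(exps)
    probs = [e / total for e in exps]
    idx = probs.index(max(probs))
    return labels[idx], probs[idx]

label, conf = classify_lung_sound([0.3, 2.1])  # stand-in logits
print(label, round(conf, 3))
```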

πŸ₯ Chest X-ray Analysis

  • Multi-class Classification: 10 lung conditions
    • Control (Normal)
    • COVID-19
    • Pleural Effusion
    • Lung Opacity
    • Mass
    • Nodule
    • Pneumonia
    • Pneumothorax
    • Pulmonary Fibrosis
    • Tuberculosis
  • Advanced Preprocessing: CLAHE + Green Fire Blue (GFB) colormap enhancement
  • Visualization: 4-panel analysis with confidence scores
  • Supported Formats: JPG, PNG, BMP, TIFF
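To illustrate the contrast-enhancement idea behind the CLAHE step, here is a simplified global histogram equalization in NumPy. This is only a stand-in: the real pipeline uses tile-based CLAHE plus the Green Fire Blue colormap.

```python
import numpy as np

def enhance_xray(gray):
    # Simplified stand-in for CLAHE: global histogram equalization
    # on an 8-bit grayscale X-ray.
    hist = np.bincount(gray.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_norm = (cdf - cdf.min()) * 255 // max(cdf.max() - cdf.min(), 1)
    return cdf_norm[gray].astype(np.uint8)

# Tiny 2x2 "image": dark and bright pixels get stretched apart.
img = np.array([[10, 10], [200, 200]], dtype=np.uint8)
print(enhance_xray(img))
```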

🤖 RAG-Powered Medical Assistant

  • 17,000+ Medical Q&A knowledge base
  • Context-Aware Chat: Patient-specific information retrieval
  • Multi-language: English and Urdu support
  • LLM: Google Gemini 2.0 Flash
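At its core, the retrieval step ranks knowledge-base embeddings by similarity to the question embedding and hands the best matches to the LLM as context. A toy NumPy sketch, where 2-D vectors and made-up answers stand in for SentenceTransformer embeddings and Qdrant search:

```python
import numpy as np

def retrieve(query_vec, kb_vecs, kb_answers, k=2):
    # Rank knowledge-base vectors by cosine similarity to the query
    # and return the top-k answers that would be fed to the LLM.
    q = query_vec / np.linalg.norm(query_vec)
    kb = kb_vecs / np.linalg.norm(kb_vecs, axis=1, keepdims=True)
    scores = kb @ q
    top = np.argsort(scores)[::-1][:k]
    return [kb_answers[i] for i in top]

kb = np.array([[1.0, 0.0], [0.0, 1.0], [0.7, 0.7]])
answers = ["about cough", "about fever", "about both"]
hits = retrieve(np.array([1.0, 0.2]), kb, answers)
print(hits)
```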

📄 Professional Reports

  • Automated PDF Generation: Medical-grade reports
  • AI-Generated Summaries: Concise clinical insights
  • Integrated Visualizations: XAI heatmaps and analysis
  • Patient History Tracking: Complete analysis records

👥 Patient Management

  • Registration System: Demographics and patient tracking
  • Report History: All analyses linked to patient records
  • Multi-patient Support: Manage multiple patients

🚀 Quick Start

Prerequisites

  • Python 3.8+
  • Node.js 16+
  • Docker (for Qdrant vector database)
  • Google Gemini API Key (available from Google AI Studio)

Installation

  1. Clone the repository

    git clone https://github.com/aunraza19/LungsCareAI.git
    cd LungsCareAI
  2. Download ML Models

    Due to GitHub's file size limitations, download the models separately:

    • Audio Model (final_model_ast (1).pt - ~350MB)
    • X-ray Model (final_model.keras - ~100MB)

    Download Instructions:

    • See MODELS.md for detailed download options
    • Upload to Google Drive, Hugging Face, or use Git LFS
    • Place both model files in the project root directory
  3. Set up environment variables

    cp .env.example .env
    # Edit .env and add your GEMINI_API_KEY
  4. Backend Setup

    cd backend
    python -m venv venv
    
    # Windows
    venv\Scripts\activate
    
    # Linux/Mac
    source venv/bin/activate
    
    pip install -r requirements.txt
  5. Frontend Setup

    cd frontend
    npm install
  6. Start Qdrant Vector Database

    Windows:

    .\start_qdrant.ps1

    Linux/Mac:

    docker run -d -p 6333:6333 -v $(pwd)/qdrant_storage:/qdrant/storage qdrant/qdrant

    📖 For detailed Qdrant setup instructions, see QDRANT_SETUP.md
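Step 3 above puts GEMINI_API_KEY into the environment, and the backend needs it at startup. A fail-fast check might look like this (illustrative only, not the project's actual code; it reads os.environ directly rather than going through python-dotenv):

```python
import os

def get_gemini_key():
    # Hypothetical startup guard: refuse to run without the API key.
    key = os.environ.get("GEMINI_API_KEY")
    if not key:
        raise RuntimeError("GEMINI_API_KEY is not set - see .env.example")
    return key

os.environ["GEMINI_API_KEY"] = "demo-key"  # stand-in value for the example
print(get_gemini_key())
```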

Running the Application

Option 1: Using automated scripts

Windows:

run_webapp_demo.bat

Linux/Mac:

chmod +x run_webapp_demo.sh
./run_webapp_demo.sh

Option 2: Manual start

Terminal 1 - Backend:

cd backend
source venv/bin/activate  # or venv\Scripts\activate on Windows
uvicorn app:app --reload --host 0.0.0.0 --port 8000

Terminal 2 - Frontend:

cd frontend
npm run dev

Terminal 3 - Qdrant:

docker run -p 6333:6333 qdrant/qdrant

Access the application:

  • Frontend: http://localhost:3000
  • Backend API: http://localhost:8000


πŸ—οΈ Architecture

┌──────────────────────────────────────────────────────────────┐
│                       Frontend (React)                       │
│  ┌──────────┬──────────┬──────────┬──────────┬──────────┐    │
│  │   Home   │ Register │  Audio   │  X-ray   │   Chat   │    │
│  └──────────┴──────────┴──────────┴──────────┴──────────┘    │
└──────────────────────────┬───────────────────────────────────┘
                           │ REST API
┌──────────────────────────▼───────────────────────────────────┐
│                      Backend (FastAPI)                       │
│  ┌────────────────────┬───────────────────┬─────────────┐    │
│  │   Audio Analysis   │  X-ray Analysis   │  RAG Chat   │    │
│  │   (AST Model)      │  (Custom CNN)     │  (Gemini)   │    │
│  └────────────────────┴───────────────────┴─────────────┘    │
│  ┌──────────────────────────────────────────────────────┐    │
│  │        Patient Manager  │  Report Generator          │    │
│  └──────────────────────────────────────────────────────┘    │
└──────────────────────────┬───────────────────────────────────┘
                           │
           ┌───────────────┼──────────────┐
           │               │              │
           ▼               ▼              ▼
     ┌──────────┐    ┌──────────┐    ┌──────────┐
     │  Models  │    │  Qdrant  │    │  Files   │
     │  (.pt,   │    │  Vector  │    │  (JSON,  │
     │  .keras) │    │    DB    │    │   PDF)   │
     └──────────┘    └──────────┘    └──────────┘

📊 Tech Stack

Backend

  • Framework: FastAPI
  • ML: PyTorch, TensorFlow/Keras, Transformers
  • RAG: LangChain, Qdrant, SentenceTransformers
  • LLM: Google Gemini 2.0 Flash
  • Image Processing: OpenCV, Pillow
  • Audio Processing: torchaudio, librosa
  • Reports: ReportLab

Frontend

  • Framework: React 18 + TypeScript
  • Build Tool: Vite
  • UI: Material-UI (MUI) v5
  • State: React Query
  • Routing: React Router v6
  • File Upload: React Dropzone

Database

  • Vector DB: Qdrant
  • Patient Data: JSON (upgradable to PostgreSQL)
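The JSON patient store amounts to reading a list of records, appending one, and writing the list back. A minimal sketch with an illustrative schema (the actual fields in patient_records.json may differ):

```python
import json
import tempfile
from pathlib import Path

def register_patient(db_path, name, age):
    # Illustrative record schema; the project's real fields may differ.
    path = Path(db_path)
    records = json.loads(path.read_text()) if path.exists() else []
    record = {"id": len(records) + 1, "name": name, "age": age}
    records.append(record)
    path.write_text(json.dumps(records, indent=2))  # rewrite whole file
    return record

db = Path(tempfile.mkdtemp()) / "patient_records.json"
rec = register_patient(db, "Test Patient", 42)
print(rec)
```

Rewriting the whole file on every change is why the README flags PostgreSQL as the upgrade path for production.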

πŸ“ Project Structure

LungsCareAI/
├── backend/
│   ├── app.py                 # FastAPI server
│   ├── requirements.txt       # Python dependencies
│   ├── logo.png               # Report logo
│   └── patient_records.json   # Patient database
├── frontend/
│   ├── src/
│   │   ├── App.tsx
│   │   ├── pages/             # React pages
│   │   └── components/        # React components
│   ├── package.json
│   └── vite.config.ts
├── inf.py                     # Audio analysis module
├── xray_tools.py              # X-ray analysis module
├── rag.py                     # RAG system & reports
├── final_model_ast (1).pt     # Audio model (download separately)
├── final_model.keras          # X-ray model (download separately)
├── inv_class_indices.json     # X-ray class labels
├── medical_meadow_*.json      # Medical knowledge base
├── Green Fire Blue (1).lut    # X-ray colormap
├── .env.example               # Environment template
├── .gitignore
└── README.md

🔧 Configuration

Environment Variables (.env)

GEMINI_API_KEY=your_gemini_api_key_here

Backend Configuration

  • Host: 0.0.0.0
  • Port: 8000
  • CORS: Enabled for localhost:3000

Frontend Configuration

  • Port: 3000
  • Proxy: API requests proxied to localhost:8000

📚 API Documentation

Once the backend is running, FastAPI serves interactive API docs at:

  • Swagger UI: http://localhost:8000/docs
  • ReDoc: http://localhost:8000/redoc

Key Endpoints

Patient Management

  • POST /api/patients/register - Register new patient
  • GET /api/patients - List all patients

Audio Analysis

  • POST /api/analyze/audio/basic
  • POST /api/analyze/audio/gradient
  • POST /api/analyze/audio/attention

X-ray Analysis

  • POST /api/analyze/xray/basic
  • POST /api/analyze/xray/visualization

AI Chat

  • POST /api/chat
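Any HTTP client can drive these endpoints. The sketch below builds a POST to /api/chat with the standard library; the JSON field names are assumptions, so check the live /docs page for the real request schema:

```python
import json
from urllib import request

API = "http://localhost:8000"

def build_chat_request(patient_id, message):
    # Field names are illustrative; verify against the /docs schema.
    body = json.dumps({"patient_id": patient_id, "message": message}).encode()
    return request.Request(
        f"{API}/api/chat",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_chat_request(1, "What does pleural effusion mean?")
print(req.full_url, req.get_method())
# Send with request.urlopen(req) once the backend is running.
```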

🎯 Usage

1. Register a Patient

Navigate to Patient Registration and fill in patient details.

2. Upload Medical Data

  • Go to Audio Analysis or X-ray Analysis
  • Select the patient from dropdown
  • Upload audio file or X-ray image
  • Choose analysis type

3. View Results

  • Classification with confidence score
  • Detailed medical analysis (RAG-powered)
  • Download PDF report
  • View XAI visualizations
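The idea behind the gradient-saliency visualization is that inputs whose small perturbations move the prediction most get the brightest highlight. A toy finite-difference version of that idea (the real system backpropagates through the AST model instead):

```python
import numpy as np

def saliency(f, x, eps=1e-4):
    # Approximate |df/dx_i| for each input via central differences.
    grads = np.zeros_like(x)
    for i in range(x.size):
        d = np.zeros_like(x)
        d.flat[i] = eps
        grads.flat[i] = (f(x + d) - f(x - d)) / (2 * eps)
    return np.abs(grads)

def score(x):
    # Stand-in "model": a fixed linear scorer.
    return 3.0 * x[0] - 0.5 * x[1]

sal = saliency(score, np.array([0.2, 0.7]))
print(sal)  # the first input dominates the prediction
```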

4. Chat with AI

  • Select patient (optional)
  • Choose language
  • Ask medical questions
  • Get context-aware responses

5. Access Reports

  • View all patient reports
  • Download PDFs
  • Access XAI visualizations

🧪 Testing

Sample Data

Sample medical files are included in the examples/ folder:

  • Audio: examples/H005_R4.wav
  • X-ray: examples/covid00186.jpg, examples/fib.jpeg, examples/pn1.jpeg, examples/xray.jpeg

Test Workflow

  1. Register a test patient
  2. Analyze sample audio file
  3. Analyze sample X-ray image
  4. Check generated reports
  5. Test chatbot with medical questions

🚨 Important Notes

⚠️ Medical Disclaimer

This system is for research and educational purposes only. It is NOT FDA approved and should NOT be used for actual medical diagnosis without oversight from qualified healthcare professionals.

🔒 Security Considerations

  • Production: Implement authentication (JWT)
  • Data: Encrypt sensitive patient information
  • Database: Use PostgreSQL instead of JSON
  • Compliance: Follow HIPAA/medical data regulations
  • API Keys: Never commit .env file to version control

📦 Model Files

The ML models are not included in the repository due to size constraints. Download them separately:

  • Audio Model: ~350MB
  • X-ray Model: ~100MB

🤝 Contributing

Contributions are welcome! Please follow these steps:

  1. Fork the repository
  2. Create a feature branch (git checkout -b feature/AmazingFeature)
  3. Commit your changes (git commit -m 'Add some AmazingFeature')
  4. Push to the branch (git push origin feature/AmazingFeature)
  5. Open a Pull Request

Development Guidelines

  • Follow PEP 8 for Python code
  • Use TypeScript strict mode
  • Add comments for complex logic
  • Write unit tests for new features
  • Update documentation

πŸ“ License

This project is licensed under the MIT License - see the LICENSE file for details.


πŸ™ Acknowledgments

  • MIT AST Model for audio analysis
  • Medical Meadow Dataset for knowledge base
  • Google Gemini for LLM capabilities
  • Qdrant for vector database
  • LangChain for RAG framework
  • Material-UI for React components

📧 Contact

Project Maintainer: @aunraza19


πŸ—ΊοΈ Roadmap

  • Mobile app (React Native)
  • Real-time audio streaming analysis
  • DICOM support
  • Multi-user authentication
  • Advanced analytics dashboard
  • Treatment tracking
  • Appointment scheduling
  • Multi-language expansion
  • Model fine-tuning interface
  • Integration with EHR systems

📊 Performance

  • Audio Analysis: ~3-5 seconds
  • X-ray Analysis: ~2-4 seconds
  • Report Generation: ~1-2 seconds
  • Chat Response: ~2-3 seconds

Optimizations

  • Model caching for fast inference
  • GPU acceleration support
  • HNSW vector indexing
  • Async processing
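The "model caching" optimization above is essentially load-once, reuse-everywhere. A minimal sketch with functools.lru_cache (illustrative; the real code loads the .pt/.keras weights rather than a placeholder dict):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def load_model(path):
    # Only the first call per path pays the load cost; later calls
    # return the cached object, which is why the first analysis
    # is slower than the rest.
    print(f"loading {path}")
    return {"path": path}  # placeholder for real model weights

m1 = load_model("final_model.keras")
m2 = load_model("final_model.keras")  # cache hit: no second load
print(m1 is m2)
```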

πŸ› Known Issues

  • Large model files require separate download
  • Qdrant must be running before backend
  • First analysis takes longer (model loading)
  • Limited to English/Urdu languages

See the GitHub Issues tab for the full list.


Made with ❤️ for advancing AI in healthcare
