
NeuroSense builds on previous state-of-the-art (SOTA) studies and uses a Gaussian mixture model to detect and analyze foreground movements, focusing on involuntary twitch-like motions known as fasciculations. The tool supports accurate identification of these movements, aiding research into neuromuscular conditions.

NeuroSense

Fasciculations are small, involuntary muscle twitches that can be early signs of serious conditions like Motor Neuron Disease (MND). Early detection is important for timely diagnosis and treatment. NeuroSense is a tool designed to detect these twitches using video analysis.

NeuroSense uses Gaussian Mixture Models (GMMs) for foreground detection to identify these involuntary twitch movements in video.

Motivation

We initially developed the system in MATLAB [Bibbings et al. (2019)] using Gaussian Mixture Models (GMM) for background subtraction. While effective, MATLAB limited our ability to extend the system and experiment with advanced methods. Python, with its rich set of machine learning and computer vision libraries, offers a better platform for future development.

We have now successfully migrated our code to Python using OpenCV’s BackgroundSubtractorMOG2, which is also GMM-based. The results are promising, showing clear detection of subtle twitch movements.
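The foreground-detection idea behind this migration can be illustrated with a minimal sketch. The snippet below uses a simple running-average background model in NumPy rather than the full per-pixel GMM that `BackgroundSubtractorMOG2` implements, so it demonstrates the principle only, not the actual NeuroSense pipeline:

```python
import numpy as np

def detect_foreground(frames, alpha=0.05, threshold=25):
    """Simplified background subtraction: keep a running-average background
    model and flag pixels that deviate from it by more than `threshold`.
    (NeuroSense itself uses OpenCV's GMM-based BackgroundSubtractorMOG2;
    this sketch only illustrates the underlying idea.)"""
    background = frames[0].astype(np.float64)
    masks = []
    for frame in frames[1:]:
        diff = np.abs(frame.astype(np.float64) - background)
        masks.append(diff > threshold)                          # foreground mask
        background = (1 - alpha) * background + alpha * frame   # update model
    return masks

# Synthetic example: a static "muscle" image with one twitch-like frame.
rng = np.random.default_rng(0)
base = rng.integers(90, 110, size=(64, 64))
frames = [base.copy() for _ in range(10)]
frames[5][20:30, 20:30] += 80   # brief bright patch, standing in for a twitch

masks = detect_foreground(frames)
# masks[4] (for frames[5]) flags the patch; the static frames produce empty masks
```

OpenCV's `cv2.createBackgroundSubtractorMOG2()` replaces the running average with a per-pixel mixture of Gaussians, which adapts better to noise and gradual intensity changes in ultrasound footage.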

With Python, NeuroSense can grow into a smarter, more accurate tool, helping clinicians and other researchers identify fasciculations earlier.

Our goal is to support earlier diagnosis of MND and related conditions through accessible and reliable technology.

Product Highlights

Product Preview

Architecture

Product Preview

Installation

Follow the steps below to set up and run the Flask application locally:

  1. Clone the Repository

      git clone https://github.com/Dheeraj791/NeuroSense.git
      cd NeuroSense
  2. (Optional) Create and Activate a Virtual Environment

    • macOS/Linux:

      python3 -m venv venv
      source venv/bin/activate
    • Windows (CMD):

      python -m venv venv
      venv\Scripts\activate
    • Windows (PowerShell):

      python -m venv venv
      venv\Scripts\Activate.ps1
  3. Install Dependencies

    pip install -r requirements.txt
  4. Set Environment Variables

    • macOS/Linux:

      export FLASK_APP=app.py
      export FLASK_ENV=development
    • Windows (CMD):

      set FLASK_APP=app.py
      set FLASK_ENV=development
    • Windows (PowerShell):

      $env:FLASK_APP = "app.py"
      $env:FLASK_ENV = "development"
  5. Run the Application

    flask run
  6. Visit the App

    Open your browser and go to:
    http://127.0.0.1:5000

Features

Single Video Upload

Upload a single ultrasound video along with its muscle group and probe orientation. This feature is designed to support large ultrasound files (GB-scale).

A small (1–2 MB) sample video is included for functionality testing.

Bulk Video Upload via Excel

Upload an entire folder of ultrasound videos using the folder-select option, automatically generate an Excel template, fill in details such as muscle group and probe orientation, and submit it for processing. This enables batch analysis for high-throughput use cases and larger clinical datasets.
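As a rough sketch, the per-folder metadata template could be generated along these lines. The column names here are assumptions for illustration, and the real app emits an Excel file rather than the CSV stand-in used below:

```python
import csv
import io

# Hypothetical column names -- the actual template generated by NeuroSense
# may use different headers.
COLUMNS = ["filename", "muscle_group", "probe_orientation"]

def build_template(video_names):
    """Build a CSV stand-in for the bulk-upload metadata template:
    one row per video, with blank fields for the user to fill in."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=COLUMNS)
    writer.writeheader()
    for name in video_names:
        writer.writerow({"filename": name,
                         "muscle_group": "",
                         "probe_orientation": ""})
    return buf.getvalue()

template = build_template(["scan_01.mp4", "scan_02.mp4"])
```

The same row-per-video shape carries over to the Excel version; only the serialization format differs.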

Live Video Previews & Fasciculation Visualization

After processing, the application provides live previews of all videos. Detected fasciculations are overlaid in real time, and an interactive graph shows their distribution across the video timeline for easy verification and interpretation.
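The timeline graph boils down to binning detection timestamps. A minimal sketch, where the bin width and the event representation are illustrative assumptions rather than NeuroSense's actual internals:

```python
from collections import Counter

def timeline_distribution(event_times_s, video_length_s, bin_s=5):
    """Count detected fasciculations per fixed-width time bin -- the kind
    of summary behind a timeline distribution graph."""
    n_bins = -(-int(video_length_s) // bin_s)  # ceiling division
    counts = Counter(min(int(t // bin_s), n_bins - 1) for t in event_times_s)
    return [counts.get(i, 0) for i in range(n_bins)]

# Events at 1.2 s, 3.4 s, 7.0 s, and 12.5 s in a 15-second clip, 5-second bins:
dist = timeline_distribution([1.2, 3.4, 7.0, 12.5], 15)
# dist -> [2, 1, 1]
```

Plotting such a list as a bar chart against the video timeline gives an at-a-glance view of when twitches cluster, which is what the interactive graph lets reviewers verify.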

Fullscreen Viewing Mode

All processed videos can be expanded to fullscreen, enhancing clarity during analysis and review of subtle fasciculation patterns.

Contributing

Contributions are always welcome!

See contributing.md for ways to get started.

Please adhere to this project's code of conduct.

Related Paper

This work builds on the following paper:

Bibbings et al. (2019)

Repo structure


├─ .DS_Store
├─ .github
│  └─ workflows
│     └─ paper.yml
├─ .gitignore
├─ Core
│  ├─ Readme.md
│  ├─ __init__.py
│  ├─ processing.py
│  └─ setup_ffmpeg.py
├─ LICENSE
├─ README.md
├─ app.py
├─ code_of_conduct.md
├─ contributing.md
├─ joss-paper
│  └─ figures
│     └─ hl_architecture.png
├─ paper.bib
├─ paper.md
├─ requirements.txt
├─ static
│  ├─ .DS_Store
│  ├─ css
│  │  ├─ alert.css
│  │  ├─ bulk_results.css
│  │  ├─ home.css
│  │  ├─ result.css
│  │  └─ style.css
│  ├─ images
│  │  ├─ NEuroSense.png
│  │  ├─ architecture.png
│  │  ├─ background.jpg
│  │  └─ image_product.png
│  ├─ js
│  │  ├─ .DS_Store
│  │  ├─ alert.js
│  │  ├─ config.js
│  │  └─ script.js
│  └─ logo
│     ├─ .DS_Store
│     └─ logo.png
├─ templates
│  ├─ bulk_results.html
│  ├─ index.html
│  └─ result.html
└─ tests
   ├─ .DS_Store
   ├─ Readme.md
   ├─ __init__.py
   ├─ sample_data
   │  ├─ README.md
   │  ├─ test_data_template.xlsx
   │  └─ test_video.mp4
   ├─ test_ui.py
   └─ test_util.py

Feedback

If you have any feedback, please reach out to us at pandey.dheeraj457@gmail.com
