AI-Driven Adaptive Diagnostic Engine

An adaptive testing backend built with FastAPI and MongoDB that dynamically adjusts question difficulty based on student performance and generates personalized study recommendations.

Overview

This project implements a 1-Dimensional Adaptive Testing System that dynamically selects questions based on a student's previous responses. The goal is to estimate the student's ability level while presenting questions of appropriate difficulty.

The system uses a FastAPI backend, MongoDB database, and a simple adaptive algorithm inspired by Item Response Theory (IRT). After completing the test, the system generates a personalized study plan based on the student's weaknesses.


System Architecture

The system follows a modular backend architecture.

Client (Browser / Swagger UI)
        ↓
FastAPI Backend
        ↓
Adaptive Algorithm
        ↓
MongoDB Database
        ↓
Question Dataset

Components

  • FastAPI – Handles API endpoints
  • MongoDB – Stores questions and user sessions
  • Adaptive Algorithm – Adjusts difficulty dynamically
  • AI Study Plan Generator – Suggests learning improvements

Tech Stack

Component     Technology
Backend       FastAPI (Python)
Database      MongoDB
Language      Python
API Testing   Swagger UI
AI Logic      Simple rule-based generator

Database Design

Questions Collection

Each question contains:

{
 question: string
 options: list
 correct_answer: string
 difficulty: float (0.1 – 1.0)
 topic: string
 tags: list
}

Example:

{
 "question": "Synonym of 'abundant'?",
 "options": ["scarce","plentiful","rare","tiny"],
 "correct_answer": "plentiful",
 "difficulty": 0.5,
 "topic": "Vocabulary",
 "tags": ["synonym"]
}

User Sessions Collection

Tracks a student's progress:

{
 ability_score: float
 questions_answered: list
 topics_wrong: list
}
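The two schemas above can be sketched as Python dataclasses. This is an illustrative model only — the project keeps its actual schema definitions in app/models.py, and the field names below are taken directly from the collections described above.

```python
from dataclasses import dataclass, field

@dataclass
class Question:
    """One document in the Questions collection (illustrative sketch)."""
    question: str
    options: list[str]
    correct_answer: str
    difficulty: float  # expected range: 0.1 - 1.0
    topic: str
    tags: list[str] = field(default_factory=list)

@dataclass
class UserSession:
    """One document in the User Sessions collection (illustrative sketch)."""
    ability_score: float = 0.5  # baseline ability at session start
    questions_answered: list[str] = field(default_factory=list)
    topics_wrong: list[str] = field(default_factory=list)
```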

Adaptive Algorithm Logic

The system begins with a baseline ability:

ability_score = 0.5

Ability Update Rule

If the answer is correct:

ability = ability + 0.1

If incorrect:

ability = ability - 0.1

Ability is bounded between:

0.1 ≤ ability ≤ 1.0
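The update rule and its bounds fit in one small function. A minimal sketch (the real logic lives in app/adaptive.py; the function name here is hypothetical):

```python
def update_ability(ability: float, correct: bool, step: float = 0.1) -> float:
    """Apply the fixed-step update rule, then clamp to [0.1, 1.0]."""
    ability += step if correct else -step
    return max(0.1, min(1.0, ability))

update_ability(0.5, True)    # → 0.6
update_ability(0.1, False)   # → 0.1 (clamped at the lower bound)
```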

Question Selection

The next question is chosen by selecting the question whose difficulty is closest to the student's current ability score.

difficulty ≈ ability_score

This ensures the test adapts dynamically to the student's skill level.
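Closest-difficulty selection can be sketched as follows, assuming questions are dicts with `_id` and `difficulty` fields as in the schema above (the function name is hypothetical):

```python
def select_next_question(questions, ability, answered_ids):
    """Return the unanswered question whose difficulty is closest to ability,
    or None when the pool is exhausted."""
    candidates = [q for q in questions if q["_id"] not in answered_ids]
    if not candidates:
        return None
    return min(candidates, key=lambda q: abs(q["difficulty"] - ability))
```

Ties resolve to the first candidate in iteration order; a production version might randomize among ties to avoid always serving the same question.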


API Endpoints

1. Start Test Session

POST /start-session

Creates a new session and returns the first question.

Example response:

{
 "session_id": "...",
 "question": {...}
}

2. Submit Answer

POST /submit-answer

Inputs:

session_id
question_id
answer

Outputs:

{
 "correct": true/false,
 "ability": 0.6,
 "next_question": {...}
}
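The core of the /submit-answer handler — grade, update ability, pick the next question, build the response — can be sketched as a pure function. This is an assumption-laden sketch: the real endpoint also persists the updated session to MongoDB, which is omitted here, and `grade_answer` is a hypothetical name.

```python
def grade_answer(session, question, answer, question_bank):
    """Grade one answer and assemble the /submit-answer response body."""
    correct = answer == question["correct_answer"]
    # Fixed-step ability update, clamped to [0.1, 1.0]
    ability = session["ability_score"] + (0.1 if correct else -0.1)
    session["ability_score"] = max(0.1, min(1.0, ability))
    session["questions_answered"].append(question["_id"])
    if not correct:
        session["topics_wrong"].append(question["topic"])
    # Next question: closest difficulty among the unanswered pool
    remaining = [q for q in question_bank
                 if q["_id"] not in session["questions_answered"]]
    next_q = min(remaining,
                 key=lambda q: abs(q["difficulty"] - session["ability_score"]),
                 default=None)
    return {"correct": correct,
            "ability": session["ability_score"],
            "next_question": next_q}
```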

Adaptive Testing Flow

Start Session
     ↓
Return Question
     ↓
User Submits Answer
     ↓
Update Ability Score
     ↓
Select Next Question
     ↓
Repeat Until 10 Questions

After 10 questions, the system generates a study plan.


AI Study Plan Generation

Once the test is complete, the system analyzes the student's weak topics and generates a 3-step learning plan.

Example output:

{
 "message": "Test completed",
 "ability": 0.7,
 "study_plan": {
   "step1": "Review concepts in Vocabulary",
   "step2": "Practice medium difficulty questions",
   "step3": "Take another adaptive test"
 }
}
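A rule-based generator of this kind can be sketched as below. This is illustrative of the logic in app/ai_plan.py, not the exact code; the ability thresholds are assumptions chosen to match the example output above.

```python
from collections import Counter

def generate_study_plan(ability, topics_wrong):
    """Build a 3-step plan keyed on the most-missed topic and final ability."""
    most_missed = Counter(topics_wrong).most_common(1)
    focus = most_missed[0][0] if most_missed else "your strongest topics"
    # Map final ability to a practice difficulty band (assumed thresholds)
    band = "hard" if ability >= 0.8 else "medium" if ability >= 0.4 else "easy"
    return {
        "step1": f"Review concepts in {focus}",
        "step2": f"Practice {band} difficulty questions",
        "step3": "Take another adaptive test",
    }
```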

How to Run the Project

1. Install Dependencies

pip install -r requirements.txt

2. Start MongoDB

mongod

3. Seed the Database

python seed_questions.py

This inserts 20 GRE-style questions.


4. Start the API Server

python -m uvicorn app.main:app --reload

5. Open API Documentation

http://127.0.0.1:8000/docs

Test endpoints directly in Swagger UI.


Project Structure

adaptive_test_engine
│
├── app
│   ├── main.py
│   ├── routes.py
│   ├── adaptive.py
│   ├── database.py
│   ├── ai_plan.py
│   └── models.py
│
├── seed_questions.py
├── requirements.txt
└── README.md

AI Development Log

AI tools such as ChatGPT were used during development to:

  • accelerate FastAPI boilerplate creation
  • design MongoDB schema
  • implement adaptive question selection logic
  • debug API serialization issues
  • generate structured documentation

However, the database serialization fixes and the integration of the adaptive algorithm still required manual validation and testing.


Future Improvements

Possible extensions include:

  • Full Item Response Theory implementation
  • Better difficulty calibration
  • Frontend adaptive test interface
  • Real LLM-based personalized study plans
  • User authentication and progress tracking

Conclusion

This project demonstrates how adaptive testing systems can dynamically adjust difficulty to estimate student ability efficiently. The prototype integrates database design, API architecture, and algorithmic decision-making to simulate a modern intelligent assessment engine.
