An adaptive testing backend built with FastAPI and MongoDB that dynamically adjusts question difficulty based on student performance and generates personalized study recommendations.
This project implements a 1-Dimensional Adaptive Testing System that dynamically selects questions based on a student's previous responses. The goal is to estimate the student's ability level while presenting questions of appropriate difficulty.
The system uses a FastAPI backend, MongoDB database, and a simple adaptive algorithm inspired by Item Response Theory (IRT). After completing the test, the system generates a personalized study plan based on the student's weaknesses.
The system follows a modular backend architecture.
```text
Client (Browser / Swagger UI)
        ↓
FastAPI Backend
        ↓
Adaptive Algorithm
        ↓
MongoDB Database
        ↓
Question Dataset
```
- FastAPI – Handles API endpoints
- MongoDB – Stores questions and user sessions
- Adaptive Algorithm – Adjusts difficulty dynamically
- AI Study Plan Generator – Suggests learning improvements
| Component | Technology |
|---|---|
| Backend | FastAPI (Python) |
| Database | MongoDB |
| Language | Python |
| API Testing | Swagger UI |
| AI Logic | Simple rule-based generator |
Each question contains:
```text
{
  question: string
  options: list
  correct_answer: string
  difficulty: float (0.1 – 1.0)
  topic: string
  tags: list
}
```
Example:
```json
{
  "question": "Synonym of 'abundant'?",
  "options": ["scarce", "plentiful", "rare", "tiny"],
  "correct_answer": "plentiful",
  "difficulty": 0.5,
  "topic": "Vocabulary",
  "tags": ["synonym"]
}
```
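For illustration, this schema maps naturally onto a Pydantic model. The following is a sketch; the actual definitions in models.py may differ, and the class name is an assumption:

```python
# Illustrative Pydantic model matching the documented question schema.
from pydantic import BaseModel, Field

class Question(BaseModel):
    question: str
    options: list[str]
    correct_answer: str
    difficulty: float = Field(ge=0.1, le=1.0)  # bounded as documented
    topic: str
    tags: list[str] = []
```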
Each session tracks a student's progress:

```text
{
  ability_score: float
  questions_answered: list
  topics_wrong: list
}
```
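A matching sketch of the session document (illustrative; the shape stored in MongoDB may carry additional fields):

```python
from pydantic import BaseModel

class Session(BaseModel):
    ability_score: float = 0.5          # baseline ability, see below
    questions_answered: list[str] = []  # IDs of questions already served
    topics_wrong: list[str] = []        # topics of incorrect answers
```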
The system begins with a baseline ability:

```text
ability_score = 0.5
```

After each response, the ability is updated:

```text
ability = ability + 0.1   # correct answer
ability = ability - 0.1   # incorrect answer
```

and clamped to the range:

```text
0.1 ≤ ability ≤ 1.0
```
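A minimal sketch of this update rule (the helper name is illustrative, not necessarily what adaptive.py uses):

```python
def update_ability(ability: float, correct: bool) -> float:
    """Move ability up or down by the fixed 0.1 step, clamped to [0.1, 1.0]."""
    ability += 0.1 if correct else -0.1
    return max(0.1, min(1.0, ability))
```

A fixed step keeps the prototype simple; a full IRT implementation (see Future Work) would instead scale the update by item parameters.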
The next question is chosen by selecting the question whose difficulty is closest to the student's current ability score.
```text
difficulty ≈ ability_score
```
This ensures the test adapts dynamically to the student's skill level.
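In code, this reduces to a nearest-difficulty search. A sketch, assuming the questions have been fetched from MongoDB into a list of dicts (the function name is illustrative):

```python
def select_next_question(questions: list[dict], ability: float,
                         answered_ids: set) -> dict | None:
    """Return the unanswered question whose difficulty is closest to ability."""
    pool = [q for q in questions if q["_id"] not in answered_ids]
    return min(pool, key=lambda q: abs(q["difficulty"] - ability), default=None)
```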
`POST /start-session`

Creates a new session and returns the first question.

Example response:

```json
{
  "session_id": "...",
  "question": { ... }
}
```
`POST /submit-answer`

Inputs:

- `session_id`
- `question_id`
- `answer`

Outputs:

```json
{
  "correct": true/false,
  "ability": 0.6,
  "next_question": { ... }
}
```
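Putting the pieces together, here is a self-contained, in-memory sketch of both endpoints. Names such as `QUESTIONS`, `AnswerIn`, and `pick_question` are illustrative assumptions; the real routes.py reads from MongoDB instead of a Python list:

```python
import uuid
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

QUESTIONS = [
    {"id": "q1", "question": "Synonym of 'abundant'?",
     "options": ["scarce", "plentiful", "rare", "tiny"],
     "correct_answer": "plentiful", "difficulty": 0.5, "topic": "Vocabulary"},
]
sessions: dict[str, dict] = {}  # session_id -> session state

class AnswerIn(BaseModel):
    session_id: str
    question_id: str
    answer: str

def pick_question(ability: float, answered: list[str]):
    """Unanswered question with difficulty closest to the current ability."""
    pool = [q for q in QUESTIONS if q["id"] not in answered]
    return min(pool, key=lambda q: abs(q["difficulty"] - ability), default=None)

@app.post("/start-session")
def start_session():
    sid = str(uuid.uuid4())
    sessions[sid] = {"ability_score": 0.5, "questions_answered": [],
                     "topics_wrong": []}
    return {"session_id": sid, "question": pick_question(0.5, [])}

@app.post("/submit-answer")
def submit_answer(body: AnswerIn):
    s = sessions[body.session_id]
    q = next(q for q in QUESTIONS if q["id"] == body.question_id)
    correct = body.answer == q["correct_answer"]
    s["ability_score"] = max(0.1, min(1.0,
                             s["ability_score"] + (0.1 if correct else -0.1)))
    s["questions_answered"].append(q["id"])
    if not correct:
        s["topics_wrong"].append(q["topic"])
    return {"correct": correct, "ability": s["ability_score"],
            "next_question": pick_question(s["ability_score"],
                                           s["questions_answered"])}
```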
```text
Start Session
      ↓
Return Question
      ↓
User Submits Answer
      ↓
Update Ability Score
      ↓
Select Next Question
      ↓
Repeat Until 10 Questions
```
After 10 questions, the system generates a study plan.
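In code, the stopping condition is a simple count check (the constant and function name below are illustrative):

```python
MAX_QUESTIONS = 10  # test length used in this prototype

def test_finished(session: dict) -> bool:
    """The adaptive loop stops once a fixed number of questions is answered."""
    return len(session["questions_answered"]) >= MAX_QUESTIONS
```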
Once the test is complete, the system analyzes the student's weak topics and generates a 3-step learning plan.
Example output:
```json
{
  "message": "Test completed",
  "ability": 0.7,
  "study_plan": {
    "step1": "Review concepts in Vocabulary",
    "step2": "Practice medium difficulty questions",
    "step3": "Take another adaptive test"
  }
}
```
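A plausible rule-based sketch of this generator (the actual logic in ai_plan.py may differ) focuses step 1 on the topic missed most often:

```python
from collections import Counter

def generate_study_plan(topics_wrong: list[str]) -> dict[str, str]:
    """Build a 3-step plan around the most frequently missed topic."""
    counts = Counter(topics_wrong).most_common(1)
    focus = counts[0][0] if counts else "general review"
    return {
        "step1": f"Review concepts in {focus}",
        "step2": "Practice medium difficulty questions",
        "step3": "Take another adaptive test",
    }
```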
Install the dependencies:

```bash
pip install -r requirements.txt
```

Start MongoDB (in a separate terminal):

```bash
mongod
```

Seed the database with 20 GRE-style questions:

```bash
python seed_questions.py
```

Start the API server:

```bash
python -m uvicorn app.main:app --reload
```

Open http://127.0.0.1:8000/docs to test the endpoints directly in Swagger UI.
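The same endpoints can also be exercised from the command line; a sketch, assuming `/submit-answer` accepts its inputs as a JSON body:

```bash
curl -X POST http://127.0.0.1:8000/start-session

curl -X POST http://127.0.0.1:8000/submit-answer \
  -H "Content-Type: application/json" \
  -d '{"session_id": "<id from start-session>", "question_id": "<id>", "answer": "plentiful"}'
```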
```text
adaptive_test_engine
│
├── app
│   ├── main.py
│   ├── routes.py
│   ├── adaptive.py
│   ├── database.py
│   ├── ai_plan.py
│   └── models.py
│
├── seed_questions.py
├── requirements.txt
└── README.md
```
AI tools such as ChatGPT were used during development to:
- accelerate FastAPI boilerplate creation
- design MongoDB schema
- implement adaptive question selection logic
- debug API serialization issues
- generate structured documentation
However, debugging database serialization issues and integrating the adaptive algorithm required manual validation and testing.
Possible extensions include:
- Full Item Response Theory implementation
- Better difficulty calibration
- Frontend adaptive test interface
- Real LLM-based personalized study plans
- User authentication and progress tracking
This project demonstrates how adaptive testing systems can dynamically adjust difficulty to estimate student ability efficiently. The prototype integrates database design, API architecture, and algorithmic decision-making to simulate a modern intelligent assessment engine.