AI-powered finance advisor project with:
- a Pyramid backend for the existing application flow
- a FastAPI ML service for training and inference APIs
- a React frontend for the user interface
Fastest way to run this project:
Updated:
- Docker images are now documented for Docker Hub usage.
- `.env` should be passed at runtime; it is excluded from Docker builds going forward.
- Pull the Docker image:

  ```powershell
  docker pull n8nproject2026/finance-ai-advisor:latest
  ```

- Start the FastAPI backend:

  ```powershell
  docker run --name finance-fastapi --rm -p 8000:8000 --env-file backend/.env -e APP_MODE=fastapi n8nproject2026/finance-ai-advisor:latest
  ```

- Start the Pyramid backend in another terminal:

  ```powershell
  docker run --name finance-pyramid --rm -p 6543:6543 --env-file backend/.env -e APP_MODE=pyramid n8nproject2026/finance-ai-advisor:latest
  ```

- Start the frontend locally:

  ```powershell
  cd frontend
  $env:VITE_PYRAMID_API_ORIGIN="http://localhost:6543"
  $env:VITE_FASTAPI_API_ORIGIN="http://localhost:8000"
  npm install
  npm run dev
  ```

- Open http://localhost:5173 in the browser.
For full Docker instructions, see Docker Hub Quickstart.
- Loan approval prediction using a trained scikit-learn pipeline
- Prediction history persisted in SQLite
- FastAPI endpoints for training, loading, saving, and prediction
- Automatic Swagger and ReDoc documentation
- Postman-friendly JSON request/response workflow
- AI financial file analysis endpoints
- Follow-up advisor chat endpoints
- Local scripts for backend, frontend, and test execution
```text
FINANCE-AI-ADVISOR/
|-- backend/
|   |-- finance_ai/
|   |   |-- fastapi_app.py
|   |   |-- services/
|   |   |   `-- ml_workflow_service.py
|   |   `-- ml_models/
|   |       |-- train_model.py
|   |       |-- model.pkl
|   |       `-- loan_status_prediction.csv
|   `-- tests/
|-- frontend/
|-- scripts/
|   |-- start-backend.ps1
|   |-- start-fastapi.ps1
|   |-- start-frontend.ps1
|   |-- train_expense_model.py
|   `-- train_anomaly_model.py
`-- docs/
```
```powershell
git clone <YOUR_GITHUB_REPO_URL>
cd FINANCE-AI-ADVISOR
```

Backend setup:

```powershell
cd backend
python -m venv venv
.\venv\Scripts\Activate.ps1
pip install -r requirements.txt
Copy-Item .env.example .env
cd ..
```

Frontend setup:

```powershell
cd frontend
npm install
cd ..
```

Start the Pyramid backend:

```powershell
.\scripts\start-backend.ps1
```

Default URL: http://localhost:6543

Start the FastAPI backend:

```powershell
.\scripts\start-fastapi.ps1
```

Default URL: http://localhost:8000
Docs:

- http://localhost:8000/docs
- http://localhost:8000/redoc

Start the frontend:

```powershell
.\scripts\start-frontend.ps1
```

Default URL: http://localhost:5173
Start everything:

```powershell
.\scripts\start-all.ps1
```

Published image:

`n8nproject2026/finance-ai-advisor:latest`

This image supports two backend modes:

- `APP_MODE=pyramid` runs the Pyramid backend on port `6543`
- `APP_MODE=fastapi` runs the FastAPI backend on port `8000`
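The mode switch amounts to a small dispatch on `APP_MODE`. The sketch below is a hypothetical Python entrypoint, not the image's actual one: the launch commands, the `production.ini` filename, and the `finance_ai.fastapi_app:app` module path are assumptions.

```python
# Hypothetical sketch of the container entrypoint's mode switch; the real
# entrypoint script and the exact launch commands in the image may differ.
COMMANDS = {
    "pyramid": ["pserve", "production.ini"],
    "fastapi": ["uvicorn", "finance_ai.fastapi_app:app",
                "--host", "0.0.0.0", "--port", "8000"],
}

def resolve_command(mode: str) -> list:
    """Map an APP_MODE value to a backend launch command."""
    try:
        return COMMANDS[mode.lower()]
    except KeyError:
        raise SystemExit(
            f"Unknown APP_MODE: {mode!r} (expected 'pyramid' or 'fastapi')"
        )

# A real entrypoint would os.execvp() the resolved command; here we only
# show what each mode resolves to.
for mode in ("pyramid", "fastapi"):
    print(mode, "->", " ".join(resolve_command(mode)))
```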
Important:

- The Docker image should not contain your private `.env` file.
- Pass secrets and API keys at runtime with `--env-file backend/.env` or `-e KEY=value`.
Pull the image:

```powershell
docker pull n8nproject2026/finance-ai-advisor:latest
```

This backend handles prediction and AI upload/chat fallback:

```powershell
docker run --name finance-fastapi --rm -p 8000:8000 --env-file backend/.env -e APP_MODE=fastapi n8nproject2026/finance-ai-advisor:latest
```

Open in a browser after startup:

- http://localhost:8000/health
- http://localhost:8000/docs

This backend handles the original app routes and the prediction history dashboard:

```powershell
docker run --name finance-pyramid --rm -p 6543:6543 --env-file backend/.env -e APP_MODE=pyramid n8nproject2026/finance-ai-advisor:latest
```

Open in a browser after startup:

http://localhost:6543/
Open a new terminal:

```powershell
cd frontend
$env:VITE_PYRAMID_API_ORIGIN="http://localhost:6543"
$env:VITE_FASTAPI_API_ORIGIN="http://localhost:8000"
npm install
npm run dev
```

Then open:

http://localhost:5173
- Run a loan prediction.
- Upload a `.csv`, `.txt`, `.json`, or `.xlsx` file in the AI advisor section.
- Ask a follow-up question after analysis completes.
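The supported upload types above can be mirrored by a small client-side guard. The suffix set comes straight from the documented list; the service still performs its own validation, so this helper is illustrative only.

```python
from pathlib import PurePath

# Suffixes taken from the documented upload rule; the backend enforces
# its own validation independently of this client-side check.
SUPPORTED_SUFFIXES = {".csv", ".txt", ".json", ".xlsx"}

def is_supported_upload(filename: str) -> bool:
    """Return True if the AI advisor upload accepts this file type."""
    return PurePath(filename).suffix.lower() in SUPPORTED_SUFFIXES

print(is_supported_upload("statement.xlsx"))  # True
print(is_supported_upload("statement.pdf"))   # False
```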
If you only want one backend:

FastAPI only:

```powershell
docker run --name finance-fastapi --rm -p 8000:8000 --env-file backend/.env -e APP_MODE=fastapi n8nproject2026/finance-ai-advisor:latest
```

Pyramid only:

```powershell
docker run --name finance-pyramid --rm -p 6543:6543 --env-file backend/.env -e APP_MODE=pyramid n8nproject2026/finance-ai-advisor:latest
```

To stop the containers:

```powershell
docker stop finance-fastapi
docker stop finance-pyramid
```

Troubleshooting:

- If the frontend shows `Server returned non-JSON response (500)`, first confirm both backends are running.
- If Docker Desktop is used, create two containers, not one. One container should use `APP_MODE=fastapi`, and the other should use `APP_MODE=pyramid`.
- If AI upload/chat is used, make sure your runtime env file includes keys such as `GOOGLE_API_KEY` when needed.
- If you updated the image recently, pull again before starting:

  ```powershell
  docker pull n8nproject2026/finance-ai-advisor:latest
  ```

Build the image locally:

```powershell
docker build -f backend/Dockerfile -t finance-ai-advisor:latest .
```

Publish to Docker Hub:

```powershell
docker login
docker build -f backend/Dockerfile -t n8nproject2026/finance-ai-advisor:latest .
docker push n8nproject2026/finance-ai-advisor:latest
```

The existing workflow is preserved:
- Train the model locally
- Save the model as a local artifact
- Load the saved model
- Send inputs for prediction
- Return results as API responses instead of terminal-only output
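The train/save/load/predict loop above can be sketched with scikit-learn and joblib. This is a toy illustration with made-up data; the project's real pipeline, feature columns, and artifact handling live in `train_model.py` and the shared workflow service.

```python
import os
import tempfile

import joblib
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Toy stand-in data; the real project trains on loan_status_prediction.csv
# with its own feature set and pipeline definition.
X = np.array([[5000.0, 1500.0, 120.0, 1.0],
              [2000.0,    0.0, 200.0, 0.0]])
y = np.array([1, 0])  # 1 = approved, 0 = rejected

pipeline = make_pipeline(StandardScaler(), LogisticRegression())
pipeline.fit(X, y)                                   # train

artifact = os.path.join(tempfile.gettempdir(), "model.pkl")
joblib.dump(pipeline, artifact)                      # save the artifact
loaded = joblib.load(artifact)                       # load it back

prediction = loaded.predict([[5000, 1500, 120, 1]])  # predict on an input
print("Approved" if prediction[0] == 1 else "Rejected")
```

The FastAPI layer wraps exactly this kind of loop behind the train/load/save/predict endpoints, so the results come back as API responses instead of terminal output.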
Active model artifact:

`backend/finance_ai/ml_models/model.pkl`

Training dataset:

`backend/finance_ai/ml_models/loan_status_prediction.csv`
```powershell
cd backend
.\venv\Scripts\python.exe .\finance_ai\ml_models\train_model.py
cd ..
```

Wrapper training scripts:

```powershell
python .\scripts\train_expense_model.py
python .\scripts\train_anomaly_model.py
```

These wrappers now call the shared ML workflow service and save artifacts using their current filenames.
`GET /health`

`GET /api/v1/models/status`

`POST /api/v1/models/train`

Example body:

```json
{
  "model_name": "loan_approval",
  "persist_model": true,
  "load_after_train": true
}
```

`POST /api/v1/models/load`

Example body:

```json
{
  "model_name": "loan_approval",
  "force_reload": false
}
```

`POST /api/v1/models/save`

Example body:

```json
{
  "model_name": "loan_approval"
}
```

`POST /api/v1/predict`
The API accepts both the original column names and snake_case field names.

Example body:

```json
{
  "ApplicantIncome": 5000,
  "CoapplicantIncome": 1500,
  "LoanAmount": 120,
  "Credit_History": 1
}
```

Postman-friendly example:

```json
{
  "applicant_income": 5000,
  "coapplicant_income": 1500,
  "loan_amount": 120,
  "credit_history": 1
}
```

Use:

- Method: `POST`
- URL: `http://localhost:8000/api/v1/predict`
- Header: `Content-Type: application/json`
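The same request can be built from Python's standard library. The URL and header follow the values listed above; nothing is sent until you uncomment the `urlopen` lines with the FastAPI backend running.

```python
import json
import urllib.request

PREDICT_URL = "http://localhost:8000/api/v1/predict"  # default local port

def build_predict_request(payload: dict) -> urllib.request.Request:
    """Build the POST request described above using only the stdlib."""
    return urllib.request.Request(
        PREDICT_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_predict_request({
    "applicant_income": 5000,
    "coapplicant_income": 1500,
    "loan_amount": 120,
    "credit_history": 1,
})
print(req.get_method(), req.full_url)
# Sending it requires the FastAPI backend to be running:
#   with urllib.request.urlopen(req) as resp:
#       print(json.load(resp))
```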
Sample response shape:

```json
{
  "status": "success",
  "message": "Prediction generated successfully",
  "data": {
    "prediction": "Approved",
    "probability_approved": 0.84
  }
}
```

Project checks:

```powershell
.\scripts\run-tests.ps1
```

Focused backend test:

```powershell
cd backend
.\venv\Scripts\python.exe -m unittest tests.test_fastapi_ml_api -v
cd ..
```

Common local ports:

- Pyramid backend: `6543`
- FastAPI ML API: `8000`
- React frontend: `5173`
To inspect active ports:

```powershell
Get-NetTCPConnection -State Listen | Sort-Object LocalPort | Select-Object LocalAddress, LocalPort, OwningProcess
```

Notes:

- The original Pyramid backend was not replaced.
- FastAPI was added as a separate ML API layer for REST usage and later deployment.
- Model artifacts are stored locally and loaded from disk.
- The FastAPI layer is designed to be tested directly from Swagger UI or Postman.
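As a cross-platform alternative to the PowerShell port check, a short stdlib script can probe the three default ports. The port numbers come from the list above; the labels are just for the printout.

```python
import socket

# The project's three default local ports.
DEFAULT_PORTS = {
    "Pyramid backend": 6543,
    "FastAPI ML API": 8000,
    "React frontend": 5173,
}

def port_open(port: int, host: str = "127.0.0.1", timeout: float = 0.5) -> bool:
    """Return True if something is accepting TCP connections on the port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        sock.settimeout(timeout)
        return sock.connect_ex((host, port)) == 0

for name, port in DEFAULT_PORTS.items():
    status = "listening" if port_open(port) else "not listening"
    print(f"{name} ({port}): {status}")
```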