This document outlines how the React/Next.js frontend should communicate with the FastAPI backend. It covers authentication, text chat, the new multilingual voice chat, gamification tasks, and dashboard analytics.
Base URL when running locally: `http://127.0.0.1:8000`
The backend relies on Supabase for authentication. You do not need to hit the backend to log in or register. Instead:
- Use the Supabase JS client on the frontend to authenticate the user.
- Extract the JWT access token from the Supabase session.
- Pass this token in the `Authorization` header of every backend request.
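The token flow above can be sketched as a small helper. This is illustrative, not part of the backend: the function name is made up, and the session shape (`session.access_token` holding the JWT) is assumed from supabase-js v2.

```javascript
// Builds the Authorization header for backend requests from a Supabase session.
// Assumption: session.access_token holds the JWT (supabase-js v2 session shape).
function authHeaders(session) {
  if (!session || !session.access_token) {
    throw new Error("User is not authenticated");
  }
  return { Authorization: `Bearer ${session.access_token}` };
}
```

Spread the result into the `headers` option of `fetch` alongside any request-specific headers.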
Example Header:

```
Authorization: Bearer eyJhbGciOiJIUzI...
```

POST /api/chat
Sends the user's text message to the AI. The backend processes the emotion, checks for crisis triggers, and returns a therapeutic response alongside gamified tasks.
Request Body (JSON):

```json
{
  "session_id": "uuid-for-this-conversation",
  "message": "I am feeling extremely overwhelmed with my university assignments."
}
```

Success Response (200 OK):

```json
{
  "session_id": "uuid-for-this-conversation",
  "response": "I hear how overwhelmed you are feeling right now...",
  "suggested_tasks": [
    "Take a 5-minute breathing break",
    "List 3 things you are grateful for"
  ],
  "detected_language": "en",
  "emotions": [
    {"label": "nervousness", "score": 0.82},
    {"label": "sadness", "score": 0.45}
  ],
  "distress_scores": [
    {"label": "negative", "score": 0.85}
  ],
  "dominant_emotion": "nervousness",
  "crisis": {
    "crisis_trigger": false,
    "crisis_reasons": [],
    "helplines": []
  }
}
```

Notes for Frontend:
- If `crisis.crisis_trigger` is `true`, immediately display a persistent banner showing the `helplines`.
- Use `dominant_emotion` to change the UI colors or mascot expressions (e.g., blue for sadness, yellow for joy).
- The `suggested_tasks` are automatically saved to the user's daily tasks table in Supabase.
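The notes above could be wired up with a small response handler like this sketch. The color mapping and function name are illustrative choices, not part of the API contract:

```javascript
// Illustrative mapping from dominant_emotion to a UI accent color.
const EMOTION_COLORS = {
  joy: "#f1c40f",         // yellow for joy
  sadness: "#3498db",     // blue for sadness
  nervousness: "#9b59b6"
};

// Turns a /api/chat response body into UI decisions.
function interpretChatResponse(data) {
  return {
    showCrisisBanner: data.crisis.crisis_trigger,                   // persistent banner when true
    helplines: data.crisis.helplines,
    accentColor: EMOTION_COLORS[data.dominant_emotion] || "#95a5a6", // grey fallback for unmapped emotions
    tasks: data.suggested_tasks                                      // already persisted server-side; just render them
  };
}
```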
POST /api/chat/voice
Sends user audio (in English or Hindi) to the AI. Returns an audio stream (MP3) of the AI speaking in that language (using Cartesia's Sonic engine), along with metadata in the HTTP response headers.
Unlike the text endpoint, this requires multipart/form-data.
```javascript
const formData = new FormData();
formData.append("audio_file", audioBlob, "recording.webm"); // The recorded audio file
formData.append("session_id", "uuid-for-this-conversation");

const response = await fetch("http://127.0.0.1:8000/api/chat/voice", {
  method: "POST",
  headers: {
    "Authorization": `Bearer ${token}`
    // DO NOT set Content-Type manually. Fetch sets it automatically with the boundary.
  },
  body: formData
});
```

- Body: The response body is an `audio/mpeg` binary Blob (the AI speaking).
- Headers: Metadata is sent in custom headers. CRITICAL: These headers are URL-encoded on the backend to prevent Unicode crashes with Indian scripts (like Devanagari). You MUST decode them using `decodeURIComponent`.
```javascript
// 1. Get the MP3 audio
const audioBlob = await response.blob();
const audioUrl = URL.createObjectURL(audioBlob);
const audio = new Audio(audioUrl);
audio.play();

// 2. Extract and decode the metadata headers
const userTranscript = decodeURIComponent(response.headers.get("X-User-Transcript"));
const aiResponseText = decodeURIComponent(response.headers.get("X-AI-Response"));
const detectedLang = response.headers.get("X-AI-Language"); // e.g. "hi", "en"

// Tasks are returned as a JSON string
const tasksHeader = response.headers.get("X-AI-Tasks");
const tasks = tasksHeader ? JSON.parse(decodeURIComponent(tasksHeader)) : [];
```

- English (`en`): Supported natively. The AI understands English and responds with a realistic English voice.
- Hindi (`hi`): Supported natively. The AI understands Hindi and responds with a realistic Hindi voice (`sonic-multilingual`).
Frontend Dev Note: Never hardcode UI assumptions about the output language. Always read the `X-AI-Language` header to know whether the audio you're playing back is in `hi` or `en`.
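One subtlety worth guarding against: `response.headers.get()` returns `null` for a missing header, and `decodeURIComponent(null)` coerces that to the string `"null"`. A small helper (the name is illustrative) avoids surprises:

```javascript
// Safely read and decode an optional URL-encoded metadata header.
// Returns null if the header is absent, instead of the string "null".
function readDecodedHeader(headers, name) {
  const raw = headers.get(name);
  return raw === null ? null : decodeURIComponent(raw);
}
```

Usage: `const aiText = readDecodedHeader(response.headers, "X-AI-Response");`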
GET /api/tasks/today
Fetches all tasks assigned to the user for the current day (this includes tasks generated by the AI during chat/voice interactions).
Success Response (200 OK):
```json
[
  {
    "id": "uuid-1234",
    "user_id": "uuid-5678",
    "description": "Take a 5-minute breathing break",
    "completed": false,
    "date": "2023-10-25"
  }
]
```

POST /api/tasks/complete
Marks a specific task as completed.
Request Body (JSON):
```json
{
  "task_id": "uuid-1234",
  "completed": true
}
```

GET /api/dashboard/{session_id}
Provides historical mood data (currently 14 days of mock data) to render the user's progress graphs on the dashboard page.
Success Response (200 OK):
```json
[
  {
    "date": "2023-10-11",
    "emotion": "joy",
    "intensity": 0.85
  },
  {
    "date": "2023-10-12",
    "emotion": "sadness",
    "intensity": 0.60
  }
]
```

Frontend Implementation Notes:
- Plot `date` on the X-axis and `intensity` (0.0 to 1.0) on the Y-axis using a charting library (like Recharts).
- Color-code data points based on the `emotion` label.
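The plotting notes can be sketched as a small data-prep step before handing points to the chart. The point shape and color choices are illustrative; the actual Recharts wiring is assumed, not shown:

```javascript
// Illustrative per-emotion point colors; unmapped emotions fall back to grey.
const POINT_COLORS = { joy: "#f1c40f", sadness: "#3498db", anger: "#e74c3c" };

// Maps /api/dashboard rows to chart-ready points.
function toChartPoints(rows) {
  return rows.map((row) => ({
    x: row.date,                                  // X-axis: date
    y: row.intensity,                             // Y-axis: intensity, 0.0 to 1.0
    fill: POINT_COLORS[row.emotion] || "#95a5a6"  // color-coded by emotion label
  }));
}
```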