The frontend of ResQ is a mobile-responsive React application developed to ensure seamless usability across all key user groups, even in high-stress environments. It provides intuitive interfaces tailored to each role while maintaining accessibility, clarity, and performance.
- Submit Requests: Through natural language text, voice, or image uploads.
- Real-Time Status Updates: See progress and assignment of their request.
- Feedback: Submit feedback on completed tasks.
- Registration & Onboarding: Sign up, share skills, and receive assignments.
- Task Dashboard: View assigned missions, report progress, and volunteer for desired requests.
- Communication Hub: Chat with responders or admin for guidance.
- Step-by-Step Protocols: AI-generated instructions based on incident type and urgency.
- Mission Dashboard: Map-based interface showing geolocated incidents and task priority for assigned and completed tasks.
- Step-by-Step Protocols: AI-generated instructions based on incident type and urgency.
- Command Dashboard:
- View all active requests, responder locations, and resource inventories.
- Add first responders and resources to the system and manage all tasks assigned by the system.
- Visual metrics on request aging, fulfillment rates, and task distribution.
The ResQ platform implements a robust multi-agent architecture designed to streamline and optimize disaster response operations. It leverages LangGraph for agent workflow orchestration and Google Gemini for intelligent, multimodal input processing.
- Purpose: Analyzes incoming disaster reports and converts unstructured data into structured formats.
- Functionality:
- Extracts:
- Type of need (e.g., food, water, medical, rescue, shelter, transportation)
- Urgency level (low, medium, high, critical)
- Priority score (range: 0–100)
- Location description
- Estimated number of affected individuals
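As an illustration, the structured output of the Request Analyzer can be modeled with a simple dataclass. The field names and allowed values follow the list above; the class itself is a sketch, not the production schema:

```python
from dataclasses import dataclass
from typing import Literal

@dataclass
class ExtractedRequest:
    """Structured fields the Request Analyzer derives from a raw report (illustrative)."""
    need_type: Literal["food", "water", "medical", "rescue", "shelter", "transportation"]
    urgency_level: Literal["low", "medium", "high", "critical"]
    priority_score: int            # 0-100, higher means more urgent
    location_description: str
    estimated_people_affected: int
```

A downstream agent can then work with typed fields instead of free text.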
- Purpose: Maintains a dynamic, sorted queue of incoming disaster requests.
- Functionality:
- Priority-based scheduling
- Aging mechanism to avoid starvation of lower-priority tasks
- Automatic escalation of older requests
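The aging mechanism can be sketched as follows, assuming effective priority grows linearly with waiting time; the rate and data shapes are illustrative, not the production values:

```python
import time

class AgingQueue:
    """Sketch of an aging priority queue: effective priority = base + rate * wait time,
    so old low-priority requests eventually overtake fresh high-priority ones."""

    def __init__(self, aging_rate=0.5):
        self.aging_rate = aging_rate   # priority points gained per second of waiting
        self._items = []               # (base_priority, enqueue_time, request)

    def push(self, request, base_priority, now=None):
        now = time.monotonic() if now is None else now
        self._items.append((base_priority, now, request))

    def pop(self, now=None):
        now = time.monotonic() if now is None else now
        # pick the request with the highest aged priority (automatic escalation)
        best = max(self._items, key=lambda it: it[0] + self.aging_rate * (now - it[1]))
        self._items.remove(best)
        return best[2]
```

With a rate of 1.0, a priority-10 request that has waited 200 s (effective 210) is served before a priority-50 request that has waited 100 s (effective 150).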
- Purpose: Assigns tasks to volunteers or first responders based on dynamic criteria.
- Functionality:
- Matches personnel using:
- Distance to incident
- Skill and role compatibility
- Urgency and severity
- Delegates tasks and creates:
- Volunteer instructions
- First responder protocols
- Affected individual guidance
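A minimal sketch of how the matching criteria above could combine into a single score; the haversine distance is standard, but the weights, data shapes, and cutoff are illustrative assumptions, not the production matching logic:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    # great-circle distance between two lat/lon points, in kilometers
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 6371 * 2 * asin(sqrt(a))

def match_score(responder, incident, max_km=50):
    """Higher is better: nearby, skill-compatible responders win; urgency amplifies."""
    dist = haversine_km(responder["lat"], responder["lon"], incident["lat"], incident["lon"])
    proximity = max(0.0, 1 - dist / max_km)          # 1.0 on site, 0.0 beyond max_km
    skill = 1.0 if incident["need"] in responder["skills"] else 0.2
    urgency = {"low": 1, "medium": 2, "high": 3, "critical": 4}[incident["urgency"]]
    return proximity * skill * urgency
```

Ranking candidates by this score yields the closest compatible responder for the most urgent incidents first.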
- Purpose: Allocates and tracks physical resources.
- Functionality:
- Maps identified needs to available resource types
- Locates the nearest resources (food, water, medical kits, etc.)
- Updates and manages inventory
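A sketch of the need-to-resource mapping with nearest-first allocation and in-place inventory updates; the mapping table and inventory shape are assumptions for illustration:

```python
# illustrative need -> resource-type mapping
NEED_TO_RESOURCE = {"food": "food_pack", "water": "water_bottle", "medical": "medical_kit"}

def allocate(need, quantity, inventory):
    """Draw the requested quantity from the nearest depots first.
    inventory: list of dicts with 'type', 'distance_km', 'stock'."""
    wanted = NEED_TO_RESOURCE[need]
    allocated = []
    candidates = sorted((d for d in inventory if d["type"] == wanted and d["stock"] > 0),
                        key=lambda d: d["distance_km"])
    for depot in candidates:
        take = min(quantity, depot["stock"])
        depot["stock"] -= take            # update inventory in place
        allocated.append((depot, take))
        quantity -= take
        if quantity == 0:
            break
    return allocated
```

If the nearest depot cannot cover the full quantity, the remainder spills over to the next-nearest one.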
- Purpose: Provides personalized, step-by-step instructions tailored to different stakeholders in a disaster event.
- Functionality:
- Supervises three specialized ReAct agents to generate:
- Affected Individual Guidance: Safety steps, resource access, relocation instructions.
- First Responder Protocols: Emergency procedures, site coordination, priority handling.
- Volunteer Safety & Compliance Steps: Safe participation, logistical alignment, communication best practices.
- Evidence Integration:
- Uses Tavily API to supplement each guidance flow with real-world context and location-specific details.
- Urgency-Adaptive Model Selection:
  - For Critical urgency: Uses `gemini-2.5-flash-preview-05-20` for high-speed, accurate outputs.
  - For other cases: Defaults to `gemini-2.0-flash`.
- Purpose: Monitors task queues and responder availability.
- Functionality:
- Periodically scans responder availability
- Assigns tasks to newly available responders
- Updates request status to reflect task progress
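One scan cycle of the Queue Monitor might look like the following sketch; the task and responder data shapes are assumptions:

```python
def monitor_tick(pending_tasks, responders):
    """One periodic scan: pair each unassigned task with the first available
    responder, marking both sides (illustrative sketch)."""
    assigned = []
    free = [r for r in responders if r["available"]]
    for task in pending_tasks:
        if not free:
            break                         # no capacity left this cycle
        responder = free.pop(0)
        responder["available"] = False
        task["status"] = "assigned"       # update request status to reflect progress
        task["responder"] = responder["id"]
        assigned.append((task["id"], responder["id"]))
    return assigned
```

Running this on a timer gives the periodic availability scan described above.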
- Purpose: Continuously processes prioritized requests.
- Functionality:
- Pulls requests from the priority queue
- Monitors system load and throttles accordingly
- Handles retries and error recovery
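The retry-and-recovery behavior can be sketched with a small exponential-backoff helper; attempt counts and delays are illustrative:

```python
import time

def with_retries(fn, attempts=3, base_delay=0.1):
    """Call fn, retrying with exponential backoff; re-raise after the final failure."""
    for i in range(attempts):
        try:
            return fn()
        except Exception:
            if i == attempts - 1:
                raise                      # exhausted retries: surface the error
            time.sleep(base_delay * 2 ** i)
```

Wrapping each pulled request in such a helper lets transient failures recover without losing the task.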
- Purpose: Orchestrates the flow between agents and manages the full disaster response lifecycle.
- Functionality:
- Implements a LangGraph state machine
- Coordinates agent interactions
- Manages errors, timeouts, and edge cases
- Integrates with LangSmith for:
- Observability
- Tracing
- Feedback loop integration
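The coordinator's flow can be illustrated with a toy state machine in plain Python. This is a stand-in for the actual LangGraph `StateGraph`, with node names and state keys assumed for illustration:

```python
def run_pipeline(state, nodes, edges, start, max_steps=10):
    """Toy state machine: each node is a function state -> state,
    edges name the next node (None means finish)."""
    current = start
    for _ in range(max_steps):
        if current is None:
            return state
        try:
            state = nodes[current](state)
        except Exception as exc:
            state["error"] = f"{current}: {exc}"   # graceful error handling
            return state
        current = edges[current]
    state["error"] = "max steps exceeded"          # guard against cycles/timeouts
    return state

# illustrative three-stage flow: analyze -> prioritize -> assign
nodes = {
    "analyze":    lambda s: {**s, "need": "water"},
    "prioritize": lambda s: {**s, "priority": 80},
    "assign":     lambda s: {**s, "responder": "r1"},
}
edges = {"analyze": "prioritize", "prioritize": "assign", "assign": None}
```

In the real system, each node would be one of the agents above and LangSmith would trace every transition.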
- Natural Language Understanding: Extract structured data from free-text input
- Image Analysis: Classify and identify disaster-related visuals
- Speech-to-Text: Convert voice reports to structured information
- Priority Estimation: Auto-classify urgency and criticality
- Smart Matching: Match responders and resources intelligently
- Instruction Generation: Create contextual guidance for:
- First responders (e.g., emergency protocols)
- Volunteers (e.g., coordination tasks)
- Affected individuals (e.g., safety steps)
To validate and evaluate the extraction capabilities of our AI-powered Disaster Response Coordination Web App, we implemented a structured agentic pipeline to generate and evaluate synthetic user requests using LLM-driven agents. The process followed these main stages:
The evaluation pipeline involves the following agents, working in a supervised sequence:
```
RequestGeneratorAgent
          ↓
ExtractionSupervisorAgent
  ├── RequestExtractionAgent
  └── ExtractionJudgeAgent
```

We created synthetic user request data for five disaster types: `["flood", "earthquake", "wildfire", "landslide", "fire"]`.
We used a supervised agentic system built with ReAct agents, orchestrated as follows:
- RequestGeneratorAgent: Created 25 synthetic requests per disaster type, totaling 125 requests.
- RequestExtractionAgent: For each generated request, extracted the following structured fields using an LLM: `extracted_need`, `urgency_level`, `priority_score`, `location_description`, `estimated_people_affected`, `specific_requirements`
- We used the ExtractionJudgeAgent to assess the accuracy and realism of the extracted outputs.
- The judge returned a judgment score between 0 and 1.
- We only retained extractions with a perfect judgment score of 1.0, effectively eliminating hallucinations and ensuring data quality.
- This reduced our dataset from 125 down to 106 high-quality synthetic requests.
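The judgment filter itself is conceptually simple; a sketch, assuming each record carries its judge score:

```python
def filter_by_judgment(records, threshold=1.0):
    """Keep only extractions the judge scored at or above the threshold,
    discarding likely hallucinations (here: only perfect scores survive)."""
    return [r for r in records if r["judgment_score"] >= threshold]
```

Applied to our 125 generated requests, this perfect-score filter left 106 high-quality examples.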
To validate the effectiveness and precision of our extraction system, we used the DeepEval framework, applying its G-Eval metric to define custom evaluation criteria.
We performed evaluations on two levels:
- Overall Field Extraction Accuracy: Evaluated the quality of all fields returned by the `RequestExtractionAgent` as a complete set.
- Field-Specific Evaluation: Each field was evaluated individually for semantic correctness, consistency, and grounding: `extracted_need`, `specific_requirements`, `urgency_level`, `priority_score`, `location_description`, `estimated_people_affected`
This provided granular insights into which aspects of the extraction pipeline performed best and where improvements were needed.
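Per-field scores can be rolled up into field-level averages with a small helper like this sketch; the input shape (one dict of field scores per test case) is an assumption:

```python
def per_field_means(evaluations):
    """Average per-field scores across all test cases.
    evaluations: list of {field_name: score} dicts, scores in [0, 1]."""
    totals = {}
    for case in evaluations:
        for field_name, score in case.items():
            totals.setdefault(field_name, []).append(score)
    return {f: sum(scores) / len(scores) for f, scores in totals.items()}
```

Comparing these means across fields is what surfaced which parts of the extraction pipeline needed work.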
| Component | Description |
|---|---|
| Disaster Types | Flood, Earthquake, Wildfire, Landslide, Fire |
| Initial Requests | 125 total (25 per type) |
| Valid Requests After Judgment Agent | 106 |
| Key Extraction Fields | extracted_need, urgency_level, priority_score, location_description, estimated_people_affected, specific_requirements, overall |
| Evaluation Framework | DeepEval with G-Eval |
| Agentic System | Built using ReAct agents with a supervision loop |
| Judgment Filter | Removed hallucinations based on a perfect score threshold of 1.0 |
| Technology | Role |
|---|---|
| LangGraph | Agent workflow orchestration |
| Google Gemini | NLP, image, and speech processing |
| LangSmith | Workflow tracing and feedback collection |
| FastAPI | Backend API framework |
| PostgreSQL | Persistent data storage |
| SQLAlchemy | ORM for PostgreSQL |
| DeepEval | Evaluation and scoring of generated responses |
| Tavily API | Real-world evidence and contextual grounding for steps |
- Offline Mode: Operates in low/no internet environments
- Aging Queue Mechanism: Prevents request starvation
- Queue Monitoring: Manages responder workloads efficiently
- Graceful Error Handling: Built-in fallback logic
- Observability: Full workflow tracing via LangSmith
- Feedback Loop: Incorporates human feedback for model fine-tuning
The PostgreSQL database is hosted on Google Cloud SQL, ensuring managed and resilient data storage. For image analysis, the CLIP model runs on a dedicated Google Cloud VM, providing efficient multimodal processing. All user-uploaded media files, including images, are stored in Google Cloud Storage (GCS), enabling secure, low-latency access across the platform.
- Mobile Apps for affected citizens and volunteers
- Command Center Dashboards for real-time monitoring
- Analytics Interfaces for government and NGOs
- Logistics Systems for inventory and supply chain coordination.
To run the ResQ application locally, follow these steps:
- Clone the Repository:
  ```bash
  git clone https://github.com/Isara-Li/ResQ.git
  ```
- Navigate to the Project Directory:
  ```bash
  cd ResQ
  ```
- Install Dependencies for the Backend:
  ```bash
  cd backend
  pip install -r requirements.txt
  ```
- Install Dependencies for the Frontend:
  ```bash
  cd ../frontend
  npm install
  ```
- Set Up Environment Variables: Create a `.env` file in the `backend` and `frontend` directories and add the necessary environment variables, as specified in the `.env.template` file.
- Start the backend server:
  ```bash
  make run
  ```
- Start the frontend:
  ```bash
  npm run dev
  ```
- Important: Since our database and Firebase are running in the cloud, you can request access by sending an email to liyanageisara@gmail.com.




