ResQ: AI-Powered Disaster Response System


🌐 Application Overview

The frontend of ResQ is a mobile-responsive React application developed to ensure seamless usability across all key user groups, even in high-stress environments. It provides intuitive interfaces tailored to each role while maintaining accessibility, clarity, and performance.

πŸ”Ή Key User Interfaces

1. Affected Individuals

  • Submit Requests: Through natural language text, voice, or image uploads.
  • Real-Time Status Updates: See progress and assignment of their request.
  • Feedback: Submit feedback for completed tasks.


2. Volunteers

  • Registration & Onboarding: Sign up, share skills, and receive assignments.
  • Task Dashboard: View assigned missions, report progress, and get assigned to preferred requests.
  • Communication Hub: Chat with responders or admin for guidance.
  • Step-by-Step Protocols: AI-generated instructions based on incident type and urgency.


3. First Responders

  • Mission Dashboard: Map-based interface showing geolocated incidents and task priority for assigned and completed tasks.
  • Step-by-Step Protocols: AI-generated instructions based on incident type and urgency.


4. Government Help Center/Admin

  • Command Dashboard:
    • View all active requests, responder locations, and resource inventories.
    • Add first responders and resources to the system and manage all system-assigned tasks.
    • Visual metrics on request aging, fulfillment rates, and task distribution.


Architecture Overview: Agentic Workflow

The ResQ platform implements a robust multi-agent architecture designed to streamline and optimize disaster response operations. It leverages LangGraph for agent workflow orchestration and Google Gemini for intelligent, multimodal input processing.

πŸ”§ Key Components

1. Request Processor Agent

  • Purpose: Analyzes incoming disaster reports and converts unstructured data into structured formats.
  • Functionality:
    • Extracts:
      • Type of need (e.g., food, water, medical, rescue, shelter, transportation)
      • Urgency level (low, medium, high, critical)
      • Priority score (range: 0–100)
      • Location description
      • Estimated number of affected individuals
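
The structured output can be pictured as a typed schema. Below is a minimal sketch using Pydantic; the class, enum, and field names are illustrative assumptions based on the fields listed above, not ResQ's actual code.

```python
# Illustrative sketch of the Request Processor Agent's output schema.
# Class, enum, and field names are assumptions, not the production code.
from enum import Enum
from pydantic import BaseModel, Field

class UrgencyLevel(str, Enum):
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"
    CRITICAL = "critical"

class ProcessedRequest(BaseModel):
    extracted_need: str = Field(description="food, water, medical, rescue, shelter, or transportation")
    urgency_level: UrgencyLevel
    priority_score: int = Field(ge=0, le=100)
    location_description: str
    estimated_people_affected: int

# One plausible way to populate it with Gemini via the LangChain binding (assumed wiring):
# llm = ChatGoogleGenerativeAI(model="gemini-2.0-flash")
# request = llm.with_structured_output(ProcessedRequest).invoke(raw_report_text)
```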

2. Request Priority Queue

  • Purpose: Maintains a dynamic, sorted queue of incoming disaster requests.
  • Functionality:
    • Priority-based scheduling
    • Aging mechanism to avoid starvation of lower-priority tasks
    • Automatic escalation of older requests
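
As a rough illustration of the aging mechanism, the sketch below boosts a request's effective priority the longer it waits; the escalation rate and the O(n) scan are simplifying assumptions, not the actual ResQ data structure.

```python
# Illustrative aging priority queue; escalation rate and layout are assumptions.
import time
from dataclasses import dataclass, field

@dataclass
class QueuedRequest:
    request_id: str
    base_priority: int                      # 0-100 score from the Request Processor Agent
    enqueued_at: float = field(default_factory=time.time)

class AgingPriorityQueue:
    """Priority queue whose entries gain priority as they wait, so low-priority
    requests are never starved and older requests escalate automatically."""
    AGING_POINTS_PER_MINUTE = 1.0           # assumed escalation rate

    def __init__(self) -> None:
        self._items: list[QueuedRequest] = []

    def push(self, item: QueuedRequest) -> None:
        self._items.append(item)

    def _effective_priority(self, item: QueuedRequest) -> float:
        waited_minutes = (time.time() - item.enqueued_at) / 60.0
        return item.base_priority + self.AGING_POINTS_PER_MINUTE * waited_minutes

    def pop(self) -> QueuedRequest | None:
        if not self._items:
            return None
        best = max(self._items, key=self._effective_priority)
        self._items.remove(best)
        return best
```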

3. Task Coordinator Agent

  • Purpose: Assigns tasks to volunteers or first responders based on dynamic criteria.
  • Functionality:
    • Matches personnel using:
      • Distance to incident
      • Skill and role compatibility
      • Urgency and severity
    • Delegates tasks and creates:
      • Volunteer instructions
      • First responder protocols
      • Affected individual guidance
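
One way to picture the matching logic is a score that rewards proximity and skill overlap and is amplified by urgency. The heuristic below is purely illustrative; the weights, field names, and formula are assumptions rather than the production logic.

```python
# Hypothetical matching heuristic for the Task Coordinator Agent.
import math

def haversine_km(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    """Great-circle distance between two coordinates, in kilometres."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * 6371.0 * math.asin(math.sqrt(a))

URGENCY_WEIGHT = {"low": 1, "medium": 2, "high": 3, "critical": 4}

def match_score(responder: dict, incident: dict) -> float:
    distance = haversine_km(responder["lat"], responder["lon"],
                            incident["lat"], incident["lon"])
    skill_overlap = len(set(responder["skills"]) & set(incident["required_skills"]))
    # Closer, better-skilled responders score higher; urgent incidents amplify the score.
    return URGENCY_WEIGHT[incident["urgency_level"]] * (1 + skill_overlap) / (1.0 + distance)

def pick_responder(responders: list[dict], incident: dict) -> dict | None:
    """Choose the best currently-available responder for a single incident."""
    available = [r for r in responders if r.get("available")]
    return max(available, key=lambda r: match_score(r, incident), default=None)
```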

4. Resource Assignment Agent

  • Purpose: Allocates and tracks physical resources.
  • Functionality:
    • Maps identified needs to available resource types
    • Locates the nearest resources (food, water, medical kits, etc.)
    • Updates and manages inventory
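
A simplified view of this allocation step, assuming a static need-to-resource map and per-depot inventories; the categories, depot layout, and planar distance helper are hypothetical.

```python
# Simplified allocation sketch for the Resource Assignment Agent.
NEED_TO_RESOURCES = {
    "food": ["food_pack"],
    "water": ["water_bottle"],
    "medical": ["medical_kit"],
    "shelter": ["tent", "blanket"],
}

def _distance(a: tuple[float, float], b: tuple[float, float]) -> float:
    # Planar approximation keeps the sketch short; a real system would use geodesic distance.
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

def allocate(need: str, quantity: int, depots: list[dict],
             incident_location: tuple[float, float]) -> dict | None:
    """Reserve stock of a matching resource type from the closest depot that has enough of it."""
    wanted = NEED_TO_RESOURCES.get(need, [])
    candidates = [d for d in depots
                  if any(d["inventory"].get(r, 0) >= quantity for r in wanted)]
    if not candidates:
        return None                                       # nothing in stock anywhere
    nearest = min(candidates, key=lambda d: _distance(d["location"], incident_location))
    for resource in wanted:
        if nearest["inventory"].get(resource, 0) >= quantity:
            nearest["inventory"][resource] -= quantity    # update the depot's inventory
            return {"depot": nearest["name"], "resource": resource, "quantity": quantity}
    return None
```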

5. Instruction Guidance Agent

  • Purpose: Provides personalized, step-by-step instructions tailored to different stakeholders in a disaster event.
  • Functionality:
    • Supervises three specialized ReAct agents to generate:
      • Affected Individual Guidance: Safety steps, resource access, relocation instructions.
      • First Responder Protocols: Emergency procedures, site coordination, priority handling.
      • Volunteer Safety & Compliance Steps: Safe participation, logistical alignment, communication best practices.
    • Evidence Integration:
      • Uses Tavily API to supplement each guidance flow with real-world context and location-specific details.
    • Urgency-Adaptive Model Selection:
      • For Critical urgency: Uses gemini-2.5-flash-preview-05-20 for high-speed, accurate outputs.
      • For other cases: Defaults to gemini-2.0-flash.
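
The urgency-adaptive selection reduces to a small branch. The helper name and the LangChain Gemini wrapper below are assumed wiring, but the model IDs are the ones stated above.

```python
# Urgency-adaptive model selection; helper name and integration are assumptions.
def pick_model(urgency_level: str) -> str:
    if urgency_level == "critical":
        return "gemini-2.5-flash-preview-05-20"   # fast, accurate path for critical cases
    return "gemini-2.0-flash"                     # default for everything else

# e.g. (one plausible integration):
# from langchain_google_genai import ChatGoogleGenerativeAI
# llm = ChatGoogleGenerativeAI(model=pick_model(request.urgency_level))
```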

6. Queue Monitor Agent

  • Purpose: Monitors task queues and responder availability.
  • Functionality:
    • Periodically scans responder availability
    • Assigns tasks to newly available responders
    • Updates request status to reflect task progress
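
A bare-bones version of this scan loop might look like the following; the interval and the injected helper functions are assumptions for illustration.

```python
# Periodic scan sketch for the Queue Monitor Agent.
import time

SCAN_INTERVAL_SECONDS = 30   # assumed polling interval

def monitor_loop(get_available_responders, get_pending_tasks, assign_task) -> None:
    while True:
        responders = list(get_available_responders())   # e.g. a database query
        for task in get_pending_tasks():
            if not responders:
                break
            assign_task(task, responders.pop(0))         # also updates the request status
        time.sleep(SCAN_INTERVAL_SECONDS)
```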

7. Request Queue Worker

  • Purpose: Continuously processes prioritized requests.
  • Functionality:
    • Pulls requests from the priority queue
    • Monitors system load and throttles accordingly
    • Handles retries and error recovery
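
The worker's pull-throttle-retry cycle could be sketched as below; the retry budget, backoff, and load check are illustrative assumptions rather than the actual configuration.

```python
# Sketch of the Request Queue Worker loop.
import logging
import time

MAX_RETRIES = 3
POLL_INTERVAL_SECONDS = 5

def run_worker(queue, workflow, is_overloaded) -> None:
    while True:
        if is_overloaded():
            time.sleep(POLL_INTERVAL_SECONDS)        # throttle while the system is busy
            continue
        request = queue.pop()                        # highest effective priority first
        if request is None:
            time.sleep(POLL_INTERVAL_SECONDS)        # nothing queued, wait and poll again
            continue
        for attempt in range(1, MAX_RETRIES + 1):
            try:
                workflow.invoke(request)             # hand off to the response workflow
                break
            except Exception:
                logging.exception("Attempt %d failed for request %s", attempt, request)
                time.sleep(2 ** attempt)             # exponential backoff before retrying
```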

8. Disaster Response Workflow

  • Purpose: Orchestrates the flow between agents and manages the full disaster response lifecycle.
  • Functionality:
    • Implements a LangGraph state machine
    • Coordinates agent interactions
    • Manages errors, timeouts, and edge cases
    • Integrates with LangSmith for:
      • Observability
      • Tracing
      • Feedback loop integration
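
For orientation, here is a minimal LangGraph graph in the spirit of this workflow; the node names, state schema, and linear edge layout are assumptions, and each node body is a stub. LangSmith tracing is typically enabled through environment variables rather than code changes.

```python
# Minimal LangGraph wiring; node names, state schema, and edges are illustrative.
from typing import TypedDict
from langgraph.graph import StateGraph, START, END

class ResponseState(TypedDict, total=False):
    raw_report: str
    processed_request: dict
    assignment: dict
    resources: dict
    instructions: dict

def process_request(state: ResponseState) -> dict:
    return {"processed_request": {"need": "water", "urgency_level": "high"}}   # stub

def coordinate_task(state: ResponseState) -> dict:
    return {"assignment": {"responder_id": "r-1"}}                             # stub

def assign_resources(state: ResponseState) -> dict:
    return {"resources": {"water_bottle": 50}}                                 # stub

def generate_instructions(state: ResponseState) -> dict:
    return {"instructions": {"affected": ["Move to higher ground"]}}           # stub

graph = StateGraph(ResponseState)
graph.add_node("process_request", process_request)
graph.add_node("coordinate_task", coordinate_task)
graph.add_node("assign_resources", assign_resources)
graph.add_node("generate_instructions", generate_instructions)
graph.add_edge(START, "process_request")
graph.add_edge("process_request", "coordinate_task")
graph.add_edge("coordinate_task", "assign_resources")
graph.add_edge("assign_resources", "generate_instructions")
graph.add_edge("generate_instructions", END)
app = graph.compile()
# result = app.invoke({"raw_report": "Flooding near the river, 20 people stranded"})
```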

πŸ€– AI Capabilities

  • Natural Language Understanding: Extract structured data from free-text input
  • Image Analysis: Classify and identify disaster-related visuals
  • Speech-to-Text: Convert voice reports to structured information
  • Priority Estimation: Auto-classify urgency and criticality
  • Smart Matching: Match responders and resources intelligently
  • Instruction Generation: Create contextual guidance for:
    • First responders (e.g., emergency protocols)
    • Volunteers (e.g., coordination tasks)
    • Affected individuals (e.g., safety steps)

πŸ§ͺ Synthetic Data Generation and Evaluation Process for Disaster Response Coordination

To validate and evaluate the extraction capabilities of our AI-powered Disaster Response Coordination Web App, we implemented a structured agentic pipeline to generate and evaluate synthetic user requests using LLM-driven agents. The process followed these main stages:

πŸ—οΈ Architecture Overview

The evaluation pipeline involves the following agents, working in a supervised sequence:

RequestGeneratorAgent
↓
ExtractionSupervisorAgent
  β”œβ”€β”€ RequestExtractionAgent
  └── ExtractionJudgeAgent

1. Synthetic Data Generation Using Agentic Workflow

We created synthetic user request data for five disaster types: ["flood", "earthquake", "wildfire", "landslide", "fire"].

  • We used a supervised agentic system built with ReAct agents, orchestrated as follows:

    • RequestGeneratorAgent: Created 25 synthetic requests per disaster type, totaling 125 requests.

    • RequestExtractionAgent: For each generated request, extracted the following structured fields using an LLM:

      • "extracted_need"
      • "urgency_level"
      • "priority_score"
      • "location_description"
      • "estimated_people_affected"
      • "specific_requirements"

2. Hallucination Filtering Using Judgment Agent

  • We used the ExtractionJudgeAgent to assess the accuracy and realism of the extracted outputs.
  • The judge returned a judgment score between 0 and 1.
  • We only retained extractions with a perfect judgment score of 1.0, effectively eliminating hallucinations and ensuring data quality.
  • This reduced our dataset from 125 down to 106 high-quality synthetic requests.
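
In code, this filter reduces to a simple threshold check; the record layout and the "judgment_score" key below are assumed for illustration.

```python
# Keep only extractions the judge scored as fully correct (score == 1.0).
validated = [rec for rec in extractions if rec["judgment_score"] == 1.0]
print(f"Retained {len(validated)} of {len(extractions)} synthetic requests")  # 106 of 125
```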

3. Multi-Level Evaluation with DeepEval Framework

To validate the effectiveness and precision of our extraction system, we used the DeepEval framework, applying G-Eval to define custom evaluation metrics.

Evaluation Steps:

We performed evaluations on two levels:

  • Overall Field Extraction Accuracy: Evaluated the quality of all fields returned by the RequestExtractionAgent as a complete set.

  • Field-Specific Evaluation: Each field was evaluated individually for semantic correctness, consistency, and grounding:

    • "extracted_need"
    • "specific_requirements"
    • "urgency_level"
    • "priority_score"
    • "location_description"
    • "estimated_people_affected"

This provided granular insights into which aspects of the extraction pipeline performed best and where improvements were needed.
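
As a rough sketch of how a field-specific check can be expressed with DeepEval's G-Eval metric (the criteria wording, threshold, and example inputs are illustrative, not the exact metrics used in the pipeline):

```python
# Illustrative G-Eval metric for a single field; criteria text is an assumption.
from deepeval.metrics import GEval
from deepeval.test_case import LLMTestCase, LLMTestCaseParams

urgency_metric = GEval(
    name="urgency_level correctness",
    criteria=(
        "Check whether the extracted urgency level is consistent with the severity "
        "described in the original synthetic request."
    ),
    evaluation_params=[LLMTestCaseParams.INPUT, LLMTestCaseParams.ACTUAL_OUTPUT],
    threshold=0.7,
)

test_case = LLMTestCase(
    input="Flood water is rising fast; an elderly couple is trapped on their roof.",
    actual_output="critical",
)
urgency_metric.measure(test_case)
print(urgency_metric.score, urgency_metric.reason)
```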

Summary of Key Points:

| Component | Description |
| --- | --- |
| Disaster Types | Flood, Earthquake, Wildfire, Landslide, Fire |
| Initial Requests | 125 total (25 per type) |
| Valid Requests After Judgment Agent | 106 |
| Key Extraction Fields | extracted_need, urgency_level, priority_score, location_description, estimated_people_affected, specific_requirements, overall |
| Evaluation Framework | DeepEval with G-Eval |
| Agentic System | Built using ReAct agents with a supervision loop |
| Judgment Filter | Removed hallucinations based on a perfect score threshold of 1.0 |

πŸ›  Technical Implementation

| Technology | Role |
| --- | --- |
| LangGraph | Agent workflow orchestration |
| Google Gemini | NLP, image, and speech processing |
| LangSmith | Workflow tracing and feedback collection |
| FastAPI | Backend API framework |
| PostgreSQL | Persistent data storage |
| SQLAlchemy | ORM for PostgreSQL |
| DeepEval | Evaluation and scoring of generated responses |
| Tavily API | Real-world evidence and contextual grounding for steps |

βœ… Production-Grade Features

  • Offline Mode: Operates in low/no internet environments

  • Aging Queue Mechanism: Prevents request starvation

  • Queue Monitoring: Manages responder workloads efficiently

  • Graceful Error Handling: Built-in fallback logic

  • Observability: Full workflow tracing via LangSmith

  • Feedback Loop: Incorporates human feedback for model fine-tuning

☁️ Infrastructure and Deployment

The PostgreSQL database is hosted on Google Cloud SQL, ensuring managed and resilient data storage. For image analysis, the CLIP model runs on a dedicated Google Cloud VM, providing efficient multimodal processing capabilities. All user-uploaded media files, including images, are stored in Google Cloud Storage (GCS), enabling secure and low-latency access across the platform.

πŸ”Œ Integration Points

  • Mobile Apps for affected citizens and volunteers
  • Command Center Dashboards for real-time monitoring
  • Analytics Interfaces for government and NGOs
  • Logistics Systems for inventory and supply chain coordination.

Getting Started

To run the ResQ application locally, follow these steps:

  1. Clone the Repository:
    git clone https://github.com/Isara-Li/ResQ.git
  2. Navigate to the Project Directory:
     cd ResQ
  3. Install Dependencies for the Backend:
    cd backend
    pip install -r requirements.txt
  4. Install Dependencies for the Frontend:
    cd ../frontend
    npm install
  5. Set Up Environment Variables: Create a .env file in both the backend and frontend directories and add the necessary environment variables, as specified in the .env.template file.
  6. Start the backend server using:
    make run
  7. Start the frontend using:
    npm run dev
  8. Important: Since our database and Firebase are running in the cloud, you can request access by sending an email to liyanageisara@gmail.com.
