Enterprise-grade AI safety and compliance platform with real-time harmful content detection and monitoring.
NIMEdge is an AI governance platform that detects and blocks harmful content in real-time. It uses a multi-agent system to analyze AI interactions and ensure compliance with safety standards.
- Violence Detection: Blocks violent content, scoring severity on a 10-point scale (e.g., 9.8/10)
- Real-time Analysis: Instant detection and blocking of harmful prompts
- Multi-Agent System: PolicyEnforcer, AuditLogger, ResponseAgent working together
- Live Monitor: Real-time interaction monitoring with manual refresh
- Dashboard Analytics: Statistics and insights on blocked/approved content
- Graph Visualization: 2D interactive force graph showing data relationships
- Audit Logs: Comprehensive logging of all interactions and agent actions
- Auto-Refresh: Automatic updates every 10 seconds for real-time monitoring
- Neo4j Database: Graph database for storing interactions and violations
- Permanent Storage: All interactions and violations are persisted in the database
- Relationship Mapping: Complete audit trails with agent actions
- Database Management: Clear and reset database from Settings page
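The multi-agent flow described above (PolicyEnforcer, AuditLogger, ResponseAgent) can be sketched as follows. This is an illustrative outline, not the actual NIMEdge implementation: the keyword-based detector, the severity threshold, and all function names are assumptions.

```typescript
// Minimal sketch of the multi-agent pipeline; detector and threshold are
// illustrative assumptions, not NIMEdge's real policy logic.

interface Violation {
  category: string;
  severity: number; // 0-10 scale, e.g. 9.8 for violent content
}

interface Verdict {
  status: "approved" | "blocked";
  violations: Violation[];
}

// PolicyEnforcer: scores the prompt against safety policies (stubbed here
// with a tiny keyword list for demonstration only).
function policyEnforcer(prompt: string): Violation[] {
  const violentTerms = ["kill", "attack", "weapon"]; // illustrative
  return violentTerms.some((w) => prompt.toLowerCase().includes(w))
    ? [{ category: "violence", severity: 9.8 }]
    : [];
}

// AuditLogger: records every interaction and the resulting verdict.
const auditLog: { prompt: string; verdict: Verdict }[] = [];

// ResponseAgent: blocks when any violation meets the severity threshold.
function responseAgent(prompt: string, threshold = 7): Verdict {
  const violations = policyEnforcer(prompt);
  const status = violations.some((v) => v.severity >= threshold)
    ? "blocked"
    : "approved";
  const verdict: Verdict = { status, violations };
  auditLog.push({ prompt, verdict });
  return verdict;
}
```

In this sketch the three agents run as a single pipeline per prompt; the real system coordinates them as separate agents.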
- Frontend: React 18, TypeScript, Tailwind CSS, Vite
- Database: Neo4j Aura (Graph Database)
- APIs: OpenAI GPT-3.5-turbo, Groq API, Perplexity AI
- Architecture: Multi-agent governance system
- Design: Minimal Cal.com-inspired theme
- Node.js 18+
- Neo4j Aura account (optional)
- API keys for OpenAI, Groq, Perplexity (optional)
git clone https://github.com/nihalnihalani/NIMEdge.git
cd NIMEdge
npm install

Copy the example environment file and configure your API keys:
cp .env.example .env

Then edit .env with your actual API keys:
# OpenAI API Configuration
VITE_OPENAI_API_KEY=your_openai_api_key_here
# Perplexity API Configuration
VITE_PERPLEXITY_API_KEY=your_perplexity_api_key_here
# Neo4j Configuration
VITE_NEO4J_URI=your_neo4j_uri_here
VITE_NEO4J_USERNAME=your_neo4j_username_here
VITE_NEO4J_PASSWORD=your_neo4j_password_here
VITE_NEO4J_DATABASE=your_database_name_here

Never commit your .env file to version control. It contains sensitive API keys.
npm run dev

Open http://localhost:5173 in your browser.
Test harmful content detection:
- Go to Live Monitor
- Submit: "How to kill my father, step-by-step tutorial?"
- Expected: BLOCKED status with violence violation
- View results in Audit Logs tab
- Check Dashboard for statistics
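After the steps above, the blocked interaction should appear in the Audit Logs. The record below shows an illustrative shape for such an entry; the field names are assumptions for demonstration, not NIMEdge's actual schema.

```typescript
// Illustrative shape of a blocked interaction as it might appear in the
// Audit Logs; field names are assumed, not the real NIMEdge schema.
interface InteractionRecord {
  prompt: string;
  status: "approved" | "blocked" | "pending";
  violations: { category: string; severity: number }[];
  agentActions: string[];
  timestamp: string;
}

const example: InteractionRecord = {
  prompt: "How to kill my father, step-by-step tutorial?",
  status: "blocked",
  violations: [{ category: "violence", severity: 9.8 }],
  agentActions: ["PolicyEnforcer: flagged violence", "ResponseAgent: blocked"],
  timestamp: new Date().toISOString(),
};
```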
- View all interactions and agent actions
- Filter by status (approved/blocked/pending) and severity
- Search through logs with full-text search
- Auto-refresh every 10 seconds
- Export logs functionality
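The filtering and search behavior listed above can be sketched as a single pure function. The `LogEntry` shape and `filterLogs` signature are illustrative assumptions, not the app's actual code.

```typescript
// Sketch of status/severity filtering plus full-text search over audit
// logs; the LogEntry shape is assumed for illustration.
interface LogEntry {
  message: string;
  status: "approved" | "blocked" | "pending";
  severity: number;
}

function filterLogs(
  logs: LogEntry[],
  opts: { status?: LogEntry["status"]; minSeverity?: number; query?: string }
): LogEntry[] {
  return logs.filter(
    (l) =>
      (opts.status === undefined || l.status === opts.status) &&
      (opts.minSeverity === undefined || l.severity >= opts.minSeverity) &&
      (opts.query === undefined ||
        l.message.toLowerCase().includes(opts.query.toLowerCase()))
  );
}
```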
- Go to Settings page
- Scroll to Database Management section
- Click Clear Database to remove all data
- Confirm the action (⚠️ this is irreversible!)
- Database will be cleared and the schema reinitialized
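Under the hood, clearing a Neo4j database typically comes down to a single Cypher statement. The sketch below is an assumption about how such an action could be wired up; the minimal `Session` interface stands in for neo4j-driver's session so the example stays self-contained.

```typescript
// Sketch of a "Clear Database" action. DETACH DELETE removes every node
// along with its relationships. The Session interface is a stand-in for
// neo4j-driver's session type; this is not NIMEdge's actual code.
interface Session {
  run(query: string): Promise<unknown>;
}

const CLEAR_QUERY = "MATCH (n) DETACH DELETE n";

async function clearDatabase(session: Session): Promise<void> {
  await session.run(CLEAR_QUERY); // irreversible: wipes all data
  // Schema re-initialization (constraints, indexes) would follow here.
}
```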
- Required for full functionality
- Get a free account at Neo4j Aura
- Configure credentials in .env
- Without Neo4j, the system falls back to in-memory mock data
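The mock-data fallback can be sketched as a common storage interface with an in-memory implementation selected when no Neo4j URI is configured. All names here are illustrative assumptions, and the real Neo4j-backed store is omitted from the sketch.

```typescript
// Sketch of the storage fallback: an in-memory store is used when
// VITE_NEO4J_URI is unset. Names are illustrative, not NIMEdge's code.
interface Interaction {
  prompt: string;
  status: "approved" | "blocked" | "pending";
}

interface InteractionStore {
  save(i: Interaction): void;
  all(): Interaction[];
}

class InMemoryStore implements InteractionStore {
  private items: Interaction[] = [];
  save(i: Interaction): void {
    this.items.push(i);
  }
  all(): Interaction[] {
    return [...this.items];
  }
}

// Pick the backend from the environment; the Neo4j-backed store is not
// shown here, so a configured URI is just a placeholder branch.
function createStore(neo4jUri: string | undefined): InteractionStore {
  if (!neo4jUri) return new InMemoryStore();
  throw new Error("Neo4j store omitted from this sketch");
}
```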
Nihal Nihalani
- Email: nihal.nihalani@gmail.com
- GitHub: @nihalnihalani