# NexusAI

**AI-Powered Sales Intelligence Platform**

Multi-agent research pipeline • Company knowledge graphs • Hybrid lead scoring • Opportunity detection



## System Architecture

```mermaid
graph LR
    A[User Input] --> B[Lead Ingestion API]
    B --> C[Background Job Queue]
    C --> D[Agent Planning]
    D --> E[Parallel Research Workers]
    E --> F{Data Sufficient?}
    F -->|No| E
    F -->|Yes| G[Company Enrichment]
    G --> H[Knowledge Graph Builder]
    H --> I[Hybrid Lead Scoring]
    I --> J[Opportunity Detector]
    J --> K[Outreach Generator]
    K --> L[Frontend Dashboard]

    style A fill:#7c3aed,stroke:#7c3aed,color:#fff
    style C fill:#dc382d,stroke:#dc382d,color:#fff
    style D fill:#0891b2,stroke:#0891b2,color:#fff
    style H fill:#059669,stroke:#059669,color:#fff
    style I fill:#d97706,stroke:#d97706,color:#fff
    style L fill:#7c3aed,stroke:#7c3aed,color:#fff
```

## Pipeline Workflow

```mermaid
stateDiagram-v2
    [*] --> Plan: Lead submitted
    Plan --> Research: Generate research plan
    Research --> Evaluate: Parallel data gathering
    Evaluate --> Research: Insufficient data (retry)
    Evaluate --> Structure: Data complete
    Structure --> Score: Build company profile
    Score --> Detect: Hybrid scoring
    Detect --> Outreach: Opportunity signals
    Outreach --> [*]: Grounded email draft
```
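The research-and-retry loop above can be sketched in plain asyncio. Everything here is an illustrative stand-in for the repo's actual LangGraph nodes: the worker, the sufficiency check, and the topic names are hypothetical.

```python
import asyncio

async def research_worker(topic: str) -> dict:
    """Stand-in for one research worker (web search, scraping, etc.)."""
    await asyncio.sleep(0)  # placeholder for real network I/O
    return {"topic": topic, "found": True}

def sufficient(results: list[dict]) -> bool:
    """Stand-in for the Evaluate step: here, every topic must have data."""
    return all(r["found"] for r in results)

async def run_pipeline(topics: list[str], max_retries: int = 3) -> list[dict]:
    """Plan -> Research (parallel) -> Evaluate, retrying while data is insufficient."""
    results: list[dict] = []
    for _ in range(max_retries):
        # Research: workers run concurrently, mirroring asyncio.gather in the repo
        results = await asyncio.gather(*(research_worker(t) for t in topics))
        if sufficient(results):  # Evaluate: exit the loop once data is complete
            break
    return results

results = asyncio.run(run_pipeline(["funding", "hiring", "tech stack"]))
```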

## Key Features

| Feature | Description |
|---|---|
| Planning Agent | LLM generates a research plan before execution |
| Parallel Research | 6 concurrent research workers via `asyncio.gather` |
| Knowledge Graph | Persistent entity-relationship graph per company |
| Hybrid Scoring | 5 deterministic rules (0-20 each) + LLM reasoning |
| Opportunity Detection | Funding, hiring, tech migration, and product launch signals |
| Grounded Outreach | Every claim maps to a verified research source |
| Background Jobs | Async queue with real-time SSE progress streaming |
| Redis Caching | 6-hour TTL on research data to reduce API calls |
| Rate Limiting | slowapi: 50 req/min/IP, 10 research jobs/min |
| Structured Logging | JSON telemetry with pipeline stage timing |
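A minimal sketch of the hybrid scoring idea: 5 deterministic rules worth 0-20 points each (0-100 total), plus an LLM reasoning step. The rule names, thresholds, and input fields below are hypothetical, and the LLM step is stubbed out; the repo's actual rules may differ.

```python
def score_lead(company: dict) -> dict:
    """Hybrid scoring sketch: 5 deterministic rules (0-20 each) + LLM adjustment."""
    rules = {
        "recent_funding":   20 if company.get("funding_months_ago", 99) <= 12 else 0,
        "headcount_growth": min(20, int(company.get("headcount_growth_pct", 0) / 5)),
        "tech_fit":         20 if company.get("uses_target_stack") else 0,
        "active_hiring":    min(20, 2 * company.get("open_roles", 0)),
        "product_launch":   20 if company.get("launched_recently") else 0,
    }
    deterministic = sum(rules.values())  # 0-100 from the 5 rules
    # In the real pipeline an LLM reasons over the research data here;
    # this sketch stubs that step with a fixed neutral adjustment.
    llm_adjustment = 0
    return {"rules": rules, "score": max(0, min(100, deterministic + llm_adjustment))}

score = score_lead({"funding_months_ago": 6, "headcount_growth_pct": 40,
                    "uses_target_stack": True, "open_roles": 4,
                    "launched_recently": False})
```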

## Tech Stack

**Backend:** FastAPI, LangGraph, Groq (Llama 3.3 70B), SQLAlchemy, Redis, SSE

**Frontend:** Next.js 15, Tailwind CSS, Shadcn UI, Framer Motion, Recharts, SWR, Clerk

**Infrastructure:** PostgreSQL/SQLite, Redis (caching + job queue), slowapi rate limiting

## Quick Start

### Backend

```shell
cd backend
python -m venv venv
source venv/bin/activate        # Windows: venv\Scripts\activate
pip install -r requirements.txt
cp .env.example .env            # Fill in API keys
python -m uvicorn main:app --reload --port 8000
```

### Frontend

```shell
cd frontend
npm install
cp .env.example .env.local      # Fill in keys
npm run dev
```

## Environment Variables

| Variable | Required | Description |
|---|---|---|
| `GROQ_API_KEY` | Yes | Groq API key |
| `DATABASE_URL` | No | PostgreSQL URL (defaults to SQLite) |
| `REDIS_URL` | No | Redis URL (defaults to `localhost:6379`) |
| `NEXT_PUBLIC_CLERK_PUBLISHABLE_KEY` | Yes | Clerk auth key |
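A minimal backend `.env` using the variables above; all values are placeholders, and the Clerk key goes in the frontend's `.env.local` instead:

```
GROQ_API_KEY=gsk_your_key_here
# Optional overrides; omit to fall back to SQLite and a local Redis
DATABASE_URL=postgresql://user:pass@localhost:5432/nexusai
REDIS_URL=redis://localhost:6379/0
```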

## API Endpoints

All routes are prefixed with `/api/v1/`.

| Method | Route | Description |
|---|---|---|
| POST | `/leads` | Create a lead |
| POST | `/leads/{id}/process` | Enqueue a research job |
| GET | `/companies` | List companies with search |
| GET | `/companies/{id}` | Full intelligence profile |
| GET | `/companies/{id}/graph` | Company knowledge graph |
| GET | `/graph` | Global knowledge graph |
| GET | `/jobs/{id}/stream` | SSE progress stream |
| GET | `/analytics/pipeline-overview` | Pipeline funnel data |
| GET | `/analytics/opportunities` | All opportunity signals |
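The `/jobs/{id}/stream` endpoint emits Server-Sent Events; a client splits the body on blank lines and reads the `event:` and `data:` fields of each block. The sketch below parses an SSE body offline, with made-up event payloads (the repo's actual event schema may differ):

```python
def parse_sse(stream: str) -> list[dict]:
    """Parse an SSE response body into a list of {event, data} dicts."""
    events = []
    for block in stream.strip().split("\n\n"):  # blank line separates events
        event = {"event": "message", "data": []}
        for line in block.splitlines():
            if line.startswith("event:"):
                event["event"] = line[len("event:"):].strip()
            elif line.startswith("data:"):
                event["data"].append(line[len("data:"):].strip())
        event["data"] = "\n".join(event["data"])  # multi-line data joins with \n
        events.append(event)
    return events

body = ('event: progress\ndata: {"stage": "research", "pct": 40}\n\n'
        'event: progress\ndata: {"stage": "scoring", "pct": 80}\n\n')
events = parse_sse(body)
```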

## Project Structure

```
nexusai/
├── backend/
│   ├── main.py                    # FastAPI app with workers
│   ├── config/settings.py         # Environment configuration
│   ├── api/routes/                # 7 versioned route modules
│   ├── src/
│   │   ├── database/              # SQLAlchemy models + operations
│   │   ├── graph/                 # LangGraph agent pipeline
│   │   ├── infrastructure/        # Redis, queue, telemetry
│   │   ├── services/              # 6 business logic services
│   │   ├── llm/                   # LLM provider abstraction
│   │   ├── models/                # Pydantic + SQLAlchemy models
│   │   └── tools/                 # Search, scraping, email
│   └── requirements.txt
├── frontend/
│   ├── src/
│   │   ├── app/dashboard/         # Pages (overview, leads, companies, graph)
│   │   ├── components/            # UI + dashboard components
│   │   ├── hooks/                 # SWR data fetching hooks
│   │   └── lib/                   # API client, utilities
│   └── package.json
└── README.md
```

## License

MIT
