A Fastify-based internal API service that transforms chat prompts and project summaries into LLM-powered insights using DeepSeek or OpenAI models.
Sentinel Agent is a lightweight microservice that:
- Fetches project summaries from an upstream Sentinel backend
- Processes user queries through LLM providers (DeepSeek or OpenAI)
- Returns AI-generated insights either as complete responses or real-time streams
- Enforces internal authentication via passphrase middleware
Features:

- 🚀 Fast HTTP Server: Built on Fastify v5 with structured logging
- 🔐 Internal Authentication: Passphrase-based middleware protection
- 🤖 Multi-Provider LLM: Support for DeepSeek and OpenAI models
- 📡 Streaming Support: Server-Sent Events for real-time token streaming
- 💬 Conversation History: Maintains chat context across interactions
- 📊 Weekly Data Analysis: Specialized prompts for analyzing aggregated event data
Project structure:

```
sentinel-agent/
├── server.js            # Fastify server initialization
├── router.js            # API route definitions
├── services/
│   └── service.js       # LLM integration and streaming logic
├── middlewares/
│   └── middleware.js    # Authentication middleware
├── constants/
│   └── constant.js      # LLM configs, prompts, and constants
└── package.json         # Dependencies and scripts
```
Prerequisites:

- Node.js v18+ (for native fetch and ES modules)
- npm for package management
Install the dependencies:

```bash
npm install
```

Create a `.env` file in the project root:

```
PORT=5001
SERVER_BASE_URL=https://your-sentinel-backend.com
SENTINEL_INTERNAL_PASS=your-secure-passphrase
DEEPSEEK_KEY=your-deepseek-api-key
OPEN_AI_KEY=your-openai-api-key
```

| Variable | Required | Description |
|---|---|---|
| `PORT` | No | Server port (default: `5001`) |
| `SERVER_BASE_URL` | Yes | Base URL of the Sentinel backend used to fetch summaries |
| `SENTINEL_INTERNAL_PASS` | Yes | Shared secret for internal route authentication |
| `DEEPSEEK_KEY` | Yes | DeepSeek API key (default provider) |
| `OPEN_AI_KEY` | Conditional | OpenAI API key (required when `provider=openai`) |
Development mode (with auto-reload):

```bash
npm run dev
```

Production mode:

```bash
npm start
```

The server binds to `0.0.0.0` for container/cloud compatibility.
Health check endpoint for monitoring.
Response: "OK"
`POST /chat` generates a complete AI response based on project summaries and the user's query.
Headers:
- `passphrase`: Must match `SENTINEL_INTERNAL_PASS`
- `Content-Type`: `application/json`
Query Parameters:
- `query` (required): User's question or prompt
- `userId` (required): Sentinel user identifier
- `projectId` (required): Project ID to fetch summaries for
- `provider` (optional): `openai` or `deepseek` (default: `deepseek`)
Request Body:
```json
{
  "history": [
    { "role": "user", "content": "Previous question" },
    { "role": "assistant", "content": "Previous answer" }
  ]
}
```

Response:

```json
{
  "data": "AI-generated response based on project summaries"
}
```

Example:

```bash
curl -X POST "http://localhost:5001/chat?projectId=abc123&userId=user456&query=What%20are%20the%20trends%20this%20week" \
  -H "passphrase: your-passphrase" \
  -H "Content-Type: application/json" \
  -d '{"history":[]}'
```
`POST /stream/chat` streams AI responses in real-time using Server-Sent Events.

Parameters: Same as `/chat`
Response Headers:
- `Content-Type`: `text/event-stream`
- `Cache-Control`: `no-cache`
- `Connection`: `keep-alive`
Response Format:
data: "First"
data: " chunk"
data: " of"
data: " response"
Example:
```bash
curl -N -X POST "http://localhost:5001/stream/chat?projectId=abc123&userId=user456&query=Summarize%20weekly%20data" \
  -H "passphrase: your-passphrase" \
  -H "Content-Type: application/json" \
  -d '{"history":[]}'
```

Supported LLM providers:

| Provider | Model | Configuration |
|---|---|---|
| DeepSeek (default) | `deepseek-chat` (V3) | Uses `DEEPSEEK_KEY` and the DeepSeek API base URL |
| OpenAI | `gpt-4o` | Uses `OPEN_AI_KEY` and the standard OpenAI endpoint |
Temperature presets:

- Creative (0.7): Default for open-ended analysis
- Precise (0.1): For deterministic responses
- Balanced (0.5): Mix of creativity and precision
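Provider selection is built on `@langchain/openai`, which can also target DeepSeek's OpenAI-compatible API by overriding the base URL. A minimal sketch of that configuration under those assumptions; the option names and factory shape are illustrative, not the exact code:

```js
// Sketch: choosing a chat model per request (illustrative, not the exact code)
import { ChatOpenAI } from "@langchain/openai";

const TEMPERATURE = { CREATIVE: 0.7, BALANCED: 0.5, PRECISE: 0.1 };

export function getModel(provider = "deepseek", temperature = TEMPERATURE.CREATIVE) {
  if (provider === "openai") {
    return new ChatOpenAI({
      model: "gpt-4o",
      apiKey: process.env.OPEN_AI_KEY,
      temperature,
    });
  }
  // DeepSeek exposes an OpenAI-compatible API, so the same client works
  // with a different base URL and key.
  return new ChatOpenAI({
    model: "deepseek-chat",
    apiKey: process.env.DEEPSEEK_KEY,
    temperature,
    configuration: { baseURL: "https://api.deepseek.com" },
  });
}
```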
The agent is configured as an AI assistant specializing in weekly event data analysis, focusing on:
- Identifying trends and patterns using `aggregated_at_str` fields
- Providing actionable insights from weekly metrics
- Maintaining a conversational yet professional tone
- Using Markdown formatting in responses
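Putting the pieces together, each request's message list is assembled from the system prompt, the fetched project summaries, the prior conversation history, and the new query. A minimal sketch of that assembly, assuming the role/content message shape accepted by LangChain chat models; the function and field names are illustrative:

```js
// Sketch: building the LLM context for a request (illustrative)
export function buildMessages({ systemPrompt, summaries, history = [], query }) {
  return [
    { role: "system", content: systemPrompt },
    // Project summaries are injected as context before the conversation
    { role: "system", content: `Weekly project summaries:\n${JSON.stringify(summaries)}` },
    ...history, // prior { role, content } turns from the request body
    { role: "user", content: query },
  ];
}
```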
Hot Reload:
```bash
npm run dev
```

Logging: Fastify's built-in logger outputs structured JSON. Pipe it through pino-pretty for readable logs:

```bash
npm run dev | npx pino-pretty
```

Error Handling: Both sync and streaming endpoints gracefully handle (see the sketch after this list):
- Upstream API failures
- LLM provider errors
- Missing authentication
- Invalid parameters
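A minimal sketch of that pattern for the non-streaming endpoint, assuming a generic try/catch around the upstream fetch and the LLM call; the helper names `fetchSummaries` and `generateResponse`, the status codes, and the messages are placeholders:

```js
// Sketch: defensive handling in the /chat handler (illustrative)
export async function chatHandler(request, reply) {
  const { query, userId, projectId } = request.query;
  if (!query || !userId || !projectId) {
    return reply.code(400).send({ error: "query, userId and projectId are required" });
  }
  try {
    const summaries = await fetchSummaries(projectId, userId); // upstream Sentinel backend
    const data = await generateResponse(summaries, request.body?.history, query);
    return { data };
  } catch (err) {
    request.log.error(err);
    return reply.code(502).send({ error: "Upstream or LLM provider failure" });
  }
}
```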
```
Client Request
      ↓
[Passphrase Middleware]
      ↓
[Fetch Project Summaries from Sentinel Backend]
      ↓
[Build LLM Context with History + Summaries]
      ↓
[LLM Provider (DeepSeek/OpenAI)]
      ↓
[Return Complete Response OR Stream Chunks]
```
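The summary-fetch step uses Node's native fetch against `SERVER_BASE_URL`. A minimal sketch under that assumption; the `/summaries` path and query parameters are hypothetical, since the Sentinel backend's route is not documented here:

```js
// Sketch: fetching weekly summaries from the Sentinel backend (path is hypothetical)
export async function fetchSummaries(projectId, userId) {
  const url = new URL("/summaries", process.env.SERVER_BASE_URL); // placeholder path
  url.searchParams.set("projectId", projectId);
  url.searchParams.set("userId", userId);

  const res = await fetch(url);
  if (!res.ok) {
    throw new Error(`Sentinel backend responded with ${res.status}`);
  }
  return res.json();
}
```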
- fastify: Fast and low-overhead web framework
- @langchain/openai: LangChain integration for OpenAI-compatible APIs
- dotenv: Environment variable management
This is an internal service. For questions or contributions, contact the Sentinel platform team.
