
Automate n8n integration and tool installation#220

Open
dzp5103 wants to merge 5 commits into main from
copilot/fix-9504e697-c85f-4d1e-8909-c38070c88e12

Conversation

@dzp5103 (Owner) commented Aug 18, 2025


name: MCP Integration Pull Request
about: Pull request template for changes involving MCP servers
title: 'Enhance AI Platform with Streaming Chat, Advanced Analytics, and n8n Integration Automation'
labels: mcp-integration
assignees: ''

📋 Pull Request Summary

Type of Change:

  • New MCP server integration
  • MCP server configuration update
  • MCP automation enhancement
  • Bug fix for existing MCP functionality
  • Documentation update
  • Other: AI Platform Enhancements (Streaming Chat, Provider Health, Analytics)

Description:
This PR significantly enhances the existing AI platform (EchoTune) with real-time streaming chat capabilities, comprehensive LLM provider health monitoring, and detailed analytics/insights endpoints. It also introduces a new script for automated discovery and installation of n8n community nodes and workflow templates, directly addressing the request for more n8n integration automation.
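For context on the streaming transport: the new chat endpoint emits Server-Sent Events frames. As a rough illustration of the wire format (the event names mirror the ones used in the new route, but this parser is a hypothetical sketch, not code from this PR):

```javascript
// Hypothetical sketch: parse a single SSE frame as emitted by the new
// /stream endpoint, e.g. "event: message\ndata: {...}\n\n".
function parseSSEFrame(frame) {
  const result = { event: 'message', data: null };
  for (const line of frame.split('\n')) {
    if (line.startsWith('event: ')) result.event = line.slice('event: '.length);
    else if (line.startsWith('data: ')) result.data = JSON.parse(line.slice('data: '.length));
  }
  return result;
}

console.log(parseSSEFrame('event: message\ndata: {"delta":"Hello","isPartial":true}\n\n'));
// { event: 'message', data: { delta: 'Hello', isPartial: true } }
```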


🛡️ MCP Validation Results

Note: MCP validation will run automatically when this PR is created. Results will be posted as a comment.

Pre-submission Checklist:

  • All MCP servers are responding correctly
  • Integration tests pass for affected MCP servers
  • No security vulnerabilities introduced
  • Documentation updated if new MCPs added
  • Performance impact assessed

Expected MCP Validation:

  • Health Check: All MCP servers operational (specifically, the updated health check for mcp-server in docker-compose.yml should pass).
  • Integration Tests: Community MCP servers validated (If n8n-nodes-mcp is installed and used, its integration should be validated).
  • Security Scan: No high-severity vulnerabilities
  • Code Analysis: FileScopeMCP validation passed
  • Performance: Response times within acceptable limits

🔧 MCP Integration Details

New MCP Servers Added

None directly, but the n8n-advanced-integration-manager.js script is designed to discover and install n8n-nodes-mcp if configured.

Existing MCP Servers Modified

  • Server name: mcp-server (in docker-compose.yml)
    • Changes made: Updated health check endpoint from /health to /api/health to align with new API routing.
    • Impact: Ensures accurate health reporting for the MCP server within the Docker stack.

MCP Workflow Changes

  • Workflow file: scripts/n8n-advanced-integration-manager.js (New script)
  • Changes: Introduces automated discovery, installation, and management of n8n community nodes (including potential MCP-related nodes like n8n-nodes-mcp) and workflow templates.
  • Impact: Streamlines the process of extending n8n's capabilities and integrating new tools, potentially including MCP-related automations.

🧪 Testing

Manual Testing Performed

  • Tested streaming chat UI for real-time updates and abort functionality.
  • Validated provider health chips and quick-switch functionality.
  • Checked new analytics and insights endpoints for data accuracy and performance.
  • Verified n8n-advanced-integration-manager.js script for node discovery and template deployment (manual execution).

Automated Testing

  • All existing tests pass (especially for chat and API routes).
  • New tests added for changes (if applicable) (e.g., for new analytics endpoints, provider manager logic, SSE stream).
  • MCP validation workflow passes (for mcp-server health check).
  • Integration tests updated (for new n8n integrations and EchoTune features).

Test Coverage

  • Unit tests: New tests for LLMProviderManager circuit breaker and telemetry logic.
  • Integration tests: Scenarios covering chat streaming, provider health, and new analytics/insights endpoints.
  • E2E tests: User workflows involving real-time chat and dashboard interactions.
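The circuit-breaker unit tests mentioned above exercise state transitions roughly like the following (a minimal sketch; the real LLMProviderManager in this PR is more involved, and the threshold/cooldown defaults here are assumptions):

```javascript
// Minimal circuit breaker sketch: opens after `threshold` consecutive
// failures, half-opens after `cooldownMs`. Defaults are illustrative only.
class CircuitBreaker {
  constructor({ threshold = 3, cooldownMs = 30000 } = {}) {
    this.threshold = threshold;
    this.cooldownMs = cooldownMs;
    this.failures = 0;
    this.openedAt = null;
  }
  get state() {
    if (this.openedAt === null) return 'closed';
    return Date.now() - this.openedAt >= this.cooldownMs ? 'half-open' : 'open';
  }
  recordSuccess() { this.failures = 0; this.openedAt = null; }
  recordFailure() {
    this.failures += 1;
    if (this.failures >= this.threshold) this.openedAt = Date.now();
  }
  allowRequest() { return this.state !== 'open'; }
}
```

A unit test would keep one breaker per provider and assert closed → open → half-open transitions around simulated failures.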

📚 Documentation

Documentation Updated

  • docker-compose.yml comments (for health check changes).
  • src/api/routes/chat.js (inline comments for new endpoints and streaming logic).
  • src/chat/llm-provider-manager.js (inline comments for new methods).
  • src/api/routes/analytics.js, src/api/routes/insights.js (inline comments for new endpoints and data structures).
  • Other: COMPREHENSIVE_SELF_HOSTED_N8N_GUIDE.md (should be updated to reflect the new n8n integration manager script and its usage).

New Documentation Added

  • File: scripts/n8n-advanced-integration-manager.js
    • Purpose: Provides a command-line interface for managing n8n community nodes and workflow templates.

🚀 Deployment Considerations

Environment Variables

  • N8N_API_URL: (Existing, but critical for n8n integration manager)
  • N8N_API_KEY: (Existing, but critical for n8n integration manager)
  • N8N_WEBHOOK_BASE_URL: (Existing, but critical for n8n integration manager)
  • MONGODB_COLLECTIONS_PREFIX: (New, used in analytics/insights routes for collection naming)
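For reviewers unfamiliar with the new MONGODB_COLLECTIONS_PREFIX variable: a prefix like this is typically applied when resolving collection names, roughly as below (the helper name is hypothetical, not from this PR):

```javascript
// Hypothetical helper showing how a collection-name prefix is usually applied.
// Falls back to no prefix when MONGODB_COLLECTIONS_PREFIX is unset.
function collectionName(base, prefix = process.env.MONGODB_COLLECTIONS_PREFIX || '') {
  return `${prefix}${base}`;
}

// With MONGODB_COLLECTIONS_PREFIX=echotune_ this would yield
// 'echotune_telemetry' for collectionName('telemetry').
```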

Dependencies

  • Package: child_process, util (Node.js built-in, used in n8n-advanced-integration-manager.js)
    • Reason: For executing shell commands (npm install, etc.) to manage n8n nodes.
  • Package: express, mongodb, node-cache (Existing, but heavily used/modified)
    • Reason: Core backend functionality.

Infrastructure Impact

  • Resource requirements: Potentially increased CPU/Memory usage due to real-time streaming, more complex analytics aggregations, and potential n8n node installations.
  • Network requirements: New API endpoints for chat streaming, provider health, analytics, and insights.
  • Storage requirements: Increased disk space for new MongoDB collections (telemetry, conversations, insights).

🔍 MCP Discovery Integration

Auto-Discovery Results

  • The n8n-advanced-integration-manager.js script performs a form of "discovery" for n8n community nodes from npm.
  • Discovery report reviewed: The script can generate reports/n8n-integration-report.json.
  • Relevant MCPs selected for integration: The script includes n8n-nodes-mcp as a high-priority community node for potential installation.

Future MCP Candidates

  • Server name: n8n-nodes-mcp (mentioned in the new script)
    • Reason deferred: Not explicitly installed by default, but available for automated installation via the new script.
    • Future consideration: Full automated deployment and configuration of n8n-nodes-mcp workflows.

📊 Performance Impact

Expected Performance Changes

  • Memory usage: Potentially increased due to more in-memory data structures for circuit breakers, telemetry, and larger MongoDB result sets.
  • CPU usage: Increased for real-time SSE streaming, complex MongoDB aggregations, and background n8n node management.
  • Network usage: Additional requests/bandwidth for SSE streams, provider health polling, and new analytics/insights API calls.
  • Startup time: Minimal impact on application startup, but n8n node installation might add to initial setup time.

Performance Testing

  • Load testing performed (if significant changes)
  • Memory leak testing (if applicable)
  • Startup time measured
  • Resource monitoring configured

🔐 Security Review

Security Considerations

  • No hardcoded secrets or credentials
  • Environment variables used for sensitive data
  • Input validation implemented where needed
  • Network security considerations addressed

MCP Security

  • MCP server permissions configured correctly
  • File system access properly scoped
  • Network access restricted as needed
  • Authentication/authorization implemented

🎯 Rollback Plan

Rollback Strategy

  1. Revert the PR branch.
  2. For n8n-advanced-integration-manager.js, delete the script and any generated workflows or reports directories.
  3. For database changes, ensure MongoDB schema migrations are reversible or can be rolled back (e.g., drop new collections if they were only for this feature).
  4. For n8n nodes, manual uninstallation might be required if npm uninstall is not sufficient.

Risk Assessment

  • Medium Risk: Significant changes to core API routes, database schema, and introduction of a new external integration manager. Potential for performance regressions or unexpected behavior with new streaming/analytics features.

📝 Additional Notes

Related Issues

Relates to the user's request for "more n8n integration and automatic installation of templates and tools".

Dependencies

This PR depends on:

  • External dependency: A running n8n instance for the n8n-advanced-integration-manager.js script to interact with.
  • Infrastructure change: Existing MongoDB and Redis services (new collections/usage).

Future Work

  • Enhancement: Implement automated testing for the n8n-advanced-integration-manager.js script.
  • Optimization: Further optimize MongoDB queries for analytics and insights endpoints.
  • Integration: Enhance n8n-advanced-integration-manager.js to directly deploy workflows to a running n8n instance via its API.

✅ Final Checklist

Before requesting review:

  • All automated checks pass
  • MCP validation results reviewed
  • Documentation is complete and accurate
  • Manual testing performed
  • Security considerations addressed
  • Performance impact assessed
  • Rollback plan defined
  • Related issues linked

Ready for Review: [ ] Yes / [ ] No


This PR template ensures comprehensive MCP integration validation and maintains system reliability.



Summary by Bito

This pull request significantly enhances the AI platform by introducing real-time streaming chat capabilities and comprehensive analytics. It includes a new script for automating the discovery and installation of n8n community nodes, updates the health check endpoint for the MCP server, and improves overall monitoring accuracy.

cursoragent and others added 2 commits August 18, 2025 06:16
Co-authored-by: hmvgg9mfcn <hmvgg9mfcn@privaterelay.appleid.com>
@cursor

cursor bot commented Aug 18, 2025

Cursor Agent can help with this pull request. Just @cursor in comments and I'll start working on changes in this branch.
Learn more about Cursor Agents


@gemini-code-assist gemini-code-assist bot left a comment


Summary of Changes

Hello @dzp5103, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly upgrades the AI platform, EchoTune, by introducing real-time streaming chat capabilities, robust LLM provider health monitoring, and comprehensive analytics and insights endpoints. Furthermore, it automates the integration and installation of n8n community nodes and workflow templates, streamlining the process of extending the platform's automation functionalities.

Highlights

  • Real-time Streaming Chat: Implemented Server-Sent Events (SSE) for chat responses, providing a dynamic and responsive user experience with partial content updates and rate limit retry mechanisms.
  • LLM Provider Health & Telemetry: Introduced new API endpoints and backend logic to monitor the health, performance, and circuit breaker states of LLM providers, enhancing system observability and reliability.
  • Enhanced Analytics & Insights: Refactored and expanded analytics and insights endpoints to offer detailed data on provider performance, user engagement, listening patterns, and top content (artists, genres), leveraging advanced MongoDB aggregations.
  • Automated n8n Integration: Added a new Node.js script (n8n-advanced-integration-manager.js) that automates the discovery, installation, and management of various n8n community nodes (e.g., OpenAI, MongoDB, Docker) and pre-defined workflow templates (e.g., AI-Powered Code Review, Automated Deployment).
  • Infrastructure Updates: Adjusted Docker Compose health checks to align with new API routing and updated MongoDB schemas to support new telemetry, conversation, and insights data.
  • Frontend Enhancements: Updated the UI to display real-time provider health status, integrate streaming chat, and provide a dedicated panel for AI explanation details.

@github-actions (Contributor)

🤖 Enhanced Perplexity Browser Research Report 🌐 Browser-Enhanced

⚠️ Enhanced Perplexity Browser Research not executed.

Setup Instructions

For Full Browser Research Capabilities:

  1. Required: Add repository secret PERPLEXITY_API_KEY with your Perplexity API key
  2. Optional: Add BROWSERBASE_API_KEY for enhanced browser automation
  3. Optional: Set repository variable PERPLEXITY_MODEL (default: sonar-pro)
  4. Optional: Set ENABLE_BROWSER_RESEARCH=false to disable browser features

Cursor AI Agent Integration:

  • The enhanced workflow provides Cursor-optimized recommendations
  • Includes .cursorrules configuration suggestions
  • Browser-validated research findings
  • MCP server integration guidance

Trigger Options:

  • Label: Add run-perplexity-research to PR
  • Command: /run-perplexity-research --model=sonar-pro --depth=deep
  • Dispatch: Use workflow dispatch with PR number

Advanced Options:

  • --model=sonar-pro|sonar-small - Choose research model
  • --depth=brief|deep - Control analysis depth
  • --browser=enabled|disabled - Toggle browser research

Error: Perplexity API error: Request failed with status code 401


Research Metadata:

  • Triggered by: slash-command
  • Model: sonar-pro
  • Analysis Depth: deep
  • Browser Research: Enabled
  • Citations: 0 sources validated
  • Generated by: Enhanced Perplexity Browser Research Bot v2.0


@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces a substantial set of features, including streaming chat capabilities, new analytics and insights endpoints, and an n8n integration manager. The code is generally well-structured and the new features are impressive. However, I've identified several critical and high-severity issues that need attention. These include potential division-by-zero errors in multiple new analytics aggregation pipelines, incorrect use of new Date() within MongoDB aggregations which will lead to incorrect data, a critical error in database index creation that will prevent the application from starting, and a logic flaw in the new n8n script that can generate invalid workflows. Addressing these issues is crucial for the stability and correctness of the new functionality.

Comment on lines +575 to +603
/**
 * Generate connections between nodes
 */
generateConnectionsFromTemplate(template) {
  const connections = {};
  const nodeIds = ['webhook-trigger'];

  // Generate sequential connections
  for (let i = 0; i < template.nodes.length - 1; i++) {
    const fromNode = nodeIds[i] || `node-${i}`;
    const toNode = nodeIds[i + 1] || `node-${i + 1}`;

    if (!connections[fromNode]) {
      connections[fromNode] = {};
    }

    connections[fromNode].main = [
      [
        {
          node: toNode,
          type: 'main',
          index: 0
        }
      ]
    ];
  }

  return connections;
}

critical

The logic for generating node connections is flawed. It hardcodes webhook-trigger as a potential starting node and assumes a simple sequential flow. This will fail and create invalid workflows if a template does not have a webhook trigger, or if the 'Webhook Trigger' node is not the first element in the nodes array. This, combined with the flawed node ID generation in generateNodesFromTemplate, makes the workflow generation logic unreliable.
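A sketch of one way to address this, deriving the chain from the template's own nodes instead of a hardcoded trigger (this is a reviewer suggestion assuming nodes carry an id or can be named positionally, not code from the PR):

```javascript
// Suggested sketch: build sequential connections from the template's actual
// nodes, using each node's own id (or a positional fallback) rather than
// assuming a hardcoded 'webhook-trigger' entry point.
function generateConnectionsFromTemplate(template) {
  const ids = template.nodes.map((node, i) => node.id || `node-${i}`);
  const connections = {};
  for (let i = 0; i < ids.length - 1; i++) {
    connections[ids[i]] = {
      main: [[{ node: ids[i + 1], type: 'main', index: 0 }]],
    };
  }
  return connections;
}
```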

Comment on lines +172 to +180
        {
          name: `${prefix}insights`,
          indexes: [
            { key: { user_id: 1, type: 1, generated_at: -1 } },
            { key: { type: 1, generated_at: -1 } },
            { key: { generated_at: 1 }, options: { expireAfterSeconds: 365 * 24 * 60 * 60 } }, // Auto-expire after 1 year
            { key: { expires_at: 1 }, options: { expireAfterSeconds: 0 } }, // TTL on expires_at field
          ],
        },

critical

A MongoDB collection cannot have more than one TTL (Time-To-Live) index. This definition for the insights collection attempts to create two TTL indexes, one on generated_at and another on expires_at. This will cause a server startup error when attempting to create the indexes.

        {
          name: `${prefix}insights`,
          indexes: [
            { key: { user_id: 1, type: 1, generated_at: -1 } },
            { key: { type: 1, generated_at: -1 } },
            { key: { expires_at: 1 }, options: { expireAfterSeconds: 0 } }, // Use a single, explicit TTL index
          ],
        },

Comment on lines +85 to 90
          successRate: {
            $multiply: [
              { $divide: ['$successfulRequests', '$totalRequests'] },
              100
            ]
          },

high

The division in the aggregation pipeline to calculate successRate can lead to a "division by zero" error if $totalRequests is 0. This will cause the API request to fail.

          successRate: {
            $cond: {
              if: { $gt: ['$totalRequests', 0] },
              then: {
                $multiply: [
                  { $divide: ['$successfulRequests', '$totalRequests'] },
                  100
                ]
              },
              else: 0
            }
          },

totalDuration: 1,
uniqueTracks: { $size: '$uniqueTracks' },
uniqueArtists: { $size: '$uniqueArtists' },
avgDuration: { $divide: ['$totalDuration', '$totalPlays'] }

high

The division to calculate avgDuration may result in a "division by zero" error if $totalPlays is 0. This will cause the aggregation to fail.

          avgDuration: { $cond: { if: { $gt: ['$totalPlays', 0] }, then: { $divide: ['$totalDuration', '$totalPlays'] }, else: 0 } }

{ $multiply: [{ $size: '$uniqueEventTypes' }, 0.3] },
{ $multiply: [
{ $divide: [
{ $subtract: [new Date(), '$lastActivity'] },

high

Using new Date() inside a MongoDB aggregation pipeline is problematic. It captures the timestamp when the pipeline is constructed, not when each document is processed. This will lead to incorrect engagementScore calculations. You should use $$NOW to get the current timestamp during the aggregation execution.

                  { $subtract: ['$$NOW', '$lastActivity'] },

Comment on lines +170 to +171
avgTrackDuration: { $divide: ['$totalDuration', '$totalPlays'] },
playFrequency: { $divide: ['$totalPlays', { $size: '$uniqueTracks' }] }

high

These division operations can fail with a "division by zero" error if $totalPlays or $uniqueTracks is zero. This will cause the entire aggregation pipeline to fail.

Suggested change
avgTrackDuration: { $divide: ['$totalDuration', '$totalPlays'] },
playFrequency: { $divide: ['$totalPlays', { $size: '$uniqueTracks' }] }
avgTrackDuration: { $cond: { if: { $gt: ['$totalPlays', 0] }, then: { $divide: ['$totalDuration', '$totalPlays'] }, else: 0 } },
playFrequency: { $cond: { if: { $gt: [{ $size: '$uniqueTracks' }, 0] }, then: { $divide: ['$totalPlays', { $size: '$uniqueTracks' }] }, else: 0 } }

uniqueArtists: { $size: '$uniqueArtists' },
totalDuration: 1,
uniqueUsers: { $size: '$uniqueUsers' },
avgTrackDuration: { $divide: ['$totalDuration', '$totalPlays'] },

high

This division operation may fail with a "division by zero" error if $totalPlays is zero, causing the API request to fail.

Suggested change
avgTrackDuration: { $divide: ['$totalDuration', '$totalPlays'] },
avgTrackDuration: { $cond: { if: { $gt: ['$totalPlays', 0] }, then: { $divide: ['$totalDuration', '$totalPlays'] }, else: 0 } },

totalDurationHours: { $divide: ['$totalDuration', 1000 * 60 * 60] },
uniqueTracks: { $size: '$uniqueTracks' },
uniqueUsers: { $size: '$uniqueUsers' },
avgSessionLength: { $divide: ['$totalDuration', '$totalPlays'] }

high

This division operation can cause a "division by zero" error if $totalPlays is zero, which will make the aggregation pipeline fail.

Suggested change
avgSessionLength: { $divide: ['$totalDuration', '$totalPlays'] }
avgSessionLength: { $cond: { if: { $gt: ['$totalPlays', 0] }, then: { $divide: ['$totalDuration', '$totalPlays'] }, else: 0 } }

Comment on lines +196 to +350
router.get('/stream', requireAuth, chatRateLimit, async (req, res) => {
  const requestId = req.headers['x-request-id'] || `req_${Date.now()}_${Math.random().toString(36).substr(2, 9)}`;

  try {
    const chatbotInstance = await initializeChatbot();
    const { sessionId, message, provider, model } = req.query;

    if (!sessionId || !message) {
      return res.status(400).json({
        error: 'Missing required fields',
        message: 'sessionId and message are required',
      });
    }

    // Set up Server-Sent Events with proper headers
    res.writeHead(200, {
      'Content-Type': 'text/event-stream',
      'Cache-Control': 'no-cache',
      'Connection': 'keep-alive',
      'Access-Control-Allow-Origin': req.headers.origin || '*',
      'Access-Control-Allow-Credentials': 'true',
      'X-Request-ID': requestId,
    });

    // Send initial connection event
    res.write('event: connected\n');
    res.write(`data: ${JSON.stringify({
      message: "Connected to chat stream",
      requestId,
      timestamp: new Date().toISOString()
    })}\n\n`);

    // Heartbeat interval (every 15 seconds)
    const heartbeatInterval = setInterval(() => {
      if (!res.destroyed) {
        res.write('event: heartbeat\n');
        res.write(`data: ${JSON.stringify({
          timestamp: new Date().toISOString(),
          requestId
        })}\n\n`);
      }
    }, 15000);

    // Handle client disconnect
    req.on('close', () => {
      clearInterval(heartbeatInterval);
      console.log(`Client disconnected from stream: ${requestId}`);
    });

    // Handle client abort
    req.on('aborted', () => {
      clearInterval(heartbeatInterval);
      console.log(`Client aborted stream: ${requestId}`);
    });

    try {
      // Stream the message with retry logic for 429 errors
      let retryCount = 0;
      const maxRetries = 3;

      while (retryCount <= maxRetries) {
        try {
          for await (const chunk of chatbotInstance.streamMessage(sessionId, message, {
            provider,
            model,
            requestId,
          })) {
            if (chunk.error) {
              res.write('event: error\n');
              res.write(`data: ${JSON.stringify({
                error: chunk.error,
                requestId,
                timestamp: new Date().toISOString()
              })}\n\n`);
              break;
            } else if (chunk.type === 'chunk') {
              res.write('event: message\n');
              res.write(`data: ${JSON.stringify({
                delta: chunk.content,
                isPartial: chunk.isPartial,
                requestId,
                timestamp: new Date().toISOString()
              })}\n\n`);
            } else if (chunk.type === 'complete') {
              res.write('event: complete\n');
              res.write(`data: ${JSON.stringify({
                totalTime: chunk.totalTime,
                requestId,
                timestamp: new Date().toISOString()
              })}\n\n`);
              break;
            }
          }
          break; // Success, exit retry loop
        } catch (streamError) {
          retryCount++;

          if (streamError.status === 429 && retryCount <= maxRetries) {
            // Rate limit hit, wait with exponential backoff
            const backoffDelay = Math.pow(2, retryCount) * 1000;
            console.log(`Rate limit hit, retrying in ${backoffDelay}ms (attempt ${retryCount}/${maxRetries})`);

            res.write('event: retry\n');
            res.write(`data: ${JSON.stringify({
              message: `Rate limited, retrying in ${backoffDelay}ms`,
              retryCount,
              maxRetries,
              backoffDelay,
              requestId
            })}\n\n`);

            await new Promise(resolve => setTimeout(resolve, backoffDelay));
            continue;
          }

          // Non-retryable error or max retries exceeded
          res.write('event: error\n');
          res.write(`data: ${JSON.stringify({
            error: streamError.message,
            requestId,
            timestamp: new Date().toISOString(),
            final: true
          })}\n\n`);
          break;
        }
      }
    } catch (streamError) {
      res.write('event: error\n');
      res.write(`data: ${JSON.stringify({
        error: streamError.message,
        requestId,
        timestamp: new Date().toISOString(),
        final: true
      })}\n\n`);
    } finally {
      clearInterval(heartbeatInterval);
      res.end();
    }
  } catch (error) {
    console.error('Error in stream endpoint:', error);

    // Send final error frame if possible
    if (!res.destroyed) {
      res.write('event: error\n');
      res.write(`data: ${JSON.stringify({
        error: 'Failed to initialize stream',
        message: error.message,
        requestId,
        timestamp: new Date().toISOString(),
        final: true
      })}\n\n`);
      res.end();
    }
  }
});

medium

There is significant code duplication between the GET /stream and POST /stream endpoints. The entire logic for handling Server-Sent Events, including heartbeats, retries, and event handling, is repeated. This increases maintenance overhead and the risk of inconsistencies. Consider refactoring the common logic into a shared helper function.
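As a concrete shape for that refactor, the repeated res.write pairs could collapse into one helper along these lines (the name and signature are a suggestion, not existing code in this PR):

```javascript
// Suggested shared helper for both GET and POST /stream handlers:
// serializes one SSE frame and writes it if the response is still open.
function sseWrite(res, event, payload) {
  const frame = `event: ${event}\n` +
    `data: ${JSON.stringify({ ...payload, timestamp: new Date().toISOString() })}\n\n`;
  if (!res.destroyed) res.write(frame);
  return frame;
}
```

Each `res.write('event: …')` / `res.write('data: …')` pair in both handlers would then become a single `sseWrite(res, 'message', { delta, requestId })` call.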

Comment on lines +359 to +399
setTimeout(() => {
  if (!isComplete && eventSource.readyState === EventSource.CONNECTING) {
    console.log('EventSource failed, falling back to regular request');
    eventSource.close();

    // Fallback to regular request
    if (onSendMessage) {
      onSendMessage(inputMessage, selectedContext).then((response) => {
        setMessages((prev) =>
          prev.map((msg) =>
            msg.id === assistantMessageId
              ? {
                  ...msg,
                  content: response.response || response.content,
                  recommendations: response.recommendations || [],
                  explanation: response.explanation,
                  provider: response.provider,
                  streaming: false,
                  error: false
                }
              : msg
          )
        );
      }).catch((error) => {
        console.error('Fallback request failed:', error);
        setMessages((prev) =>
          prev.map((msg) =>
            msg.id === assistantMessageId
              ? {
                  ...msg,
                  content: 'Sorry, I encountered an error. Please try again.',
                  streaming: false,
                  error: true
                }
              : msg
          )
        );
      });
    }
  }
}, 5000);

medium

The fallback logic for when EventSource fails is implemented inside a setTimeout and duplicates the logic for updating the message state from the streaming handlers. This creates a maintainability issue, as any change to how messages are updated needs to be applied in two separate places. Consider refactoring the message update logic into a shared function to be used by both the streaming events and the fallback mechanism.
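One possible shape for that shared function (the names mirror the state used in this component, but the helper itself is a suggestion, not existing code):

```javascript
// Suggested pure helper: apply a patch to the message with the given id,
// usable by both the streaming event handlers and the fallback path, e.g.
// setMessages((prev) => patchMessage(prev, assistantMessageId, { streaming: false })).
function patchMessage(messages, id, patch) {
  return messages.map((msg) => (msg.id === id ? { ...msg, ...patch } : msg));
}
```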

cursoragent and others added 3 commits August 18, 2025 10:36
Co-authored-by: hmvgg9mfcn <hmvgg9mfcn@privaterelay.appleid.com>