
feat: wire AI chat to LLM with full LEGO multi-track generation #851

Closed
fspecii wants to merge 1 commit into ace-step:main from fspecii:main

Conversation


@fspecii fspecii commented Mar 24, 2026

This adds a real LLM backend to the AI chat panel and wires it to the full LEGO multi-track generation pipeline.

What was added

A new llmChatService supports any OpenAI-compatible provider (OpenRouter, OpenAI, DeepSeek, etc.). When the user asks for a song, the LLM responds with a structured action block that triggers generateBatch to create all tracks in parallel, using a shared seed and global caption for musical coherence.
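As an illustration, the action-block flow described above could be sketched roughly like this (the `<action>` tag format, the `GenerateBatchAction` shape, and the `parseActions` name are assumptions for illustration, not the actual implementation):

```typescript
// Hypothetical shape of a structured action the LLM embeds in its reply.
export interface GenerateBatchAction {
  type: 'generate_batch';
  seed: number;          // shared seed so all tracks stay musically coherent
  globalCaption: string; // global caption applied to every track
  tracks: { role: string; tags: string[] }[];
}

// Pull structured action blocks out of the LLM's free-text reply.
export function parseActions(reply: string): GenerateBatchAction[] {
  const actions: GenerateBatchAction[] = [];
  const re = /<action>([\s\S]*?)<\/action>/g;
  let m: RegExpExecArray | null;
  while ((m = re.exec(reply)) !== null) {
    try {
      actions.push(JSON.parse(m[1]) as GenerateBatchAction);
    } catch {
      // Malformed JSON: skip this block rather than failing the whole reply.
    }
  }
  return actions;
}
```

Each parsed action would then be handed to generateBatch, which fans out one generation request per track entry.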

The service includes a genre-aware system prompt with detailed production DNA for trap, boom bap, lo-fi, pop, house, R&B, jazz, rock, and afrobeats, so the model generates genre-accurate tags instead of generic ones.

A crypto.randomUUID polyfill was added to index.html for compatibility with older browsers.

Tested with OpenRouter using the Gemini Flash model; it works fine in my tests.

Add llmChatService with OpenRouter integration and LEGO pipeline support.
Remove hardcoded API key. Add crypto.randomUUID polyfill. Wire uiStore
to stream from real LLM with fallback to built-in assistant.

ChuxiJ commented Mar 26, 2026

Review

Thanks for the contribution! The LEGO multi-track generation pipeline via LLM is a great idea. However, this PR needs some work before it can be merged:

Must Fix

  1. No tests — 431 lines of new service code need unit tests (at minimum: parseActions, buildSystemPrompt, action execution)
  2. No CI ran — please ensure CI checks pass (type-check, unit-test, build)
  3. Hardcoded model — `const MODEL = 'google/gemini-2.0-flash-lite-001'` should be user-configurable via the settings/provider UI
  4. API key in localStorage — XSS-accessible; consider using a more secure storage approach or at minimum document the risk
  5. crypto.randomUUID polyfill in index.html — not needed for our browser support targets (Chrome 92+, Firefox 95+, Safari 15.4+). Remove or move to a proper polyfill module if needed.
  6. Side effects in main.tsx — localStorage writes before React mount; move to a lazy initializer or store init
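On item 6, a lazy initializer along these lines would avoid touching localStorage before React mounts (the function and key names here are hypothetical, a sketch rather than a prescription):

```typescript
// Cache the value after the first read; undefined means "not read yet".
let cachedApiKey: string | null | undefined;

// Read the persisted key on first use instead of as a module side effect.
export function getStoredApiKey(): string | null {
  if (cachedApiKey === undefined) {
    try {
      cachedApiKey = localStorage.getItem('llm.apiKey');
    } catch {
      cachedApiKey = null; // storage unavailable (disabled, SSR, tests)
    }
  }
  return cachedApiKey;
}
```

The same pattern works for a store's init hook: nothing runs at import time, so module evaluation order stops mattering.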

Should Fix

  1. window.__dawSummary — no TypeScript declaration, will cause type errors
  2. setTimeout(150) for state settling — fragile; use a proper store subscription or await pattern
  3. Genre system prompt — consider moving to a separate data file for maintainability
  4. __FALLBACK__ sentinel string — use a proper return type instead of magic strings
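Two of the items above (the missing global declaration and the sentinel string) could be addressed roughly as follows; all names are assumptions for illustration:

```typescript
// Declare the global so window.__dawSummary type-checks everywhere.
declare global {
  interface Window {
    __dawSummary?: string; // summary of the current DAW project state
  }
}

// A discriminated union replacing the '__FALLBACK__' magic string.
export type ChatResult =
  | { kind: 'llm'; text: string } // streamed from the configured provider
  | { kind: 'fallback' };         // route to the built-in assistant

export function describeResult(r: ChatResult): string {
  return r.kind === 'llm' ? r.text : 'built-in assistant';
}
```

The union makes the fallback path explicit at the type level, so exhaustiveness checking catches any call site that forgets to handle it.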

Please address items 1-6 and push an updated branch. Happy to re-review!

Copy link
Copy Markdown

ChuxiJ commented Mar 29, 2026

Hi @fspecii — thanks for taking the time to put this together! We appreciate the effort.

That said, ACE-Step DAW is currently in an intensive solo-development phase with very frequent updates landing daily, which makes it difficult to keep external PRs in sync and properly integrated.

The LLM chat + multi-track generation feature you've built here is definitely on our roadmap — we plan to have it implemented end-to-end by our agent-driven development pipeline (Claude Code handles all implementation work on this project).

For these reasons we're going to close this PR for now. We're not accepting external contributions at this stage, but this may change as the project matures. Thanks again for your interest in the project! 🙏


Generated by Claude Code

@ChuxiJ ChuxiJ closed this Mar 29, 2026
