Description
Feature Summary
Expose generation controls through the backend API so callers can tune model behavior instead of relying on hardcoded defaults.
Problem/Use Case
Milestone 1 explicitly calls for user customization, but the current backend generation pipeline hardcodes key generation behavior:
- `temperature=0.7`
- `max_tokens=1024`
- `MAX_CONTEXT_LENGTH = 500`
The generation endpoints do not currently accept request-level settings, so callers cannot adjust temperature, section length, or request deterministic generation.
Proposed Solution
Add validated optional request fields to the generation flow, for example:
- `temperature`
- `section_length`
- `deterministic_mode`

Then wire those values through `add_microcourse` and `generate_next_section` into `generate_microcourse_section` and `call_llm`.
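A minimal sketch of what the validated settings container could look like. The class name, field bounds, and the `resolved()` helper are all illustrative assumptions, not the project's actual API; the fallback defaults mirror the hardcoded values listed above.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

# Hypothetical request-settings container; names and bounds are
# illustrative, not the project's actual API.
@dataclass
class GenerationSettings:
    temperature: Optional[float] = None   # falls back to the current 0.7 default
    section_length: Optional[int] = None  # falls back to the current 1024-token default
    deterministic_mode: bool = False

    def __post_init__(self) -> None:
        # Reject out-of-range values at the request boundary.
        if self.temperature is not None and not 0.0 <= self.temperature <= 2.0:
            raise ValueError("temperature must be between 0.0 and 2.0")
        if self.section_length is not None and not 1 <= self.section_length <= 4096:
            raise ValueError("section_length must be between 1 and 4096")

    def resolved(self) -> Tuple[float, int]:
        """Apply defaults; deterministic_mode overrides temperature to 0."""
        if self.deterministic_mode:
            temperature = 0.0
        else:
            temperature = self.temperature if self.temperature is not None else 0.7
        max_tokens = self.section_length if self.section_length is not None else 1024
        return temperature, max_tokens
```

The endpoint handlers would construct this object from the request body and pass `resolved()` values down to `call_llm`, keeping today's behavior when no fields are supplied.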
For deterministic mode, implement best-effort stability rather than claiming perfect determinism:
- lower or zero temperature
- stable prompt construction
- optional disabling of noisy/recent web search context
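The steps above could be combined in a small helper that assembles the LLM call arguments. The function name and parameters below are hypothetical (the real `call_llm` signature may differ); it only demonstrates the best-effort stabilization, not a determinism guarantee.

```python
def build_llm_kwargs(base_prompt: str, web_context: str,
                     deterministic: bool, temperature: float = 0.7) -> dict:
    """Assemble LLM call arguments, stabilizing them in deterministic mode.

    Hypothetical helper: argument names are assumptions, not the
    project's actual call_llm interface.
    """
    kwargs = {
        # Stable prompt construction: no timestamps or other per-run noise.
        "prompt": base_prompt,
        # Zero temperature when the caller asked for repeatable output.
        "temperature": 0.0 if deterministic else temperature,
    }
    if web_context and not deterministic:
        # Fresh web-search context varies between runs, so it is only
        # prepended when determinism was not requested.
        kwargs["prompt"] = f"{web_context}\n\n{base_prompt}"
    return kwargs
```

Even with all three measures, most hosted LLM APIs do not guarantee bit-identical outputs at temperature 0, which is why the issue frames this as best-effort stability.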
Alternative Solutions
No response
Priority
Nice to have
Feature Scope
AI/ML (Content Generation)
Additional Context
No response
Checklist
- This feature would benefit other users, not just me
- I have searched for similar feature requests
- I am willing to help implement this feature