[FEATURE] Add generation controls to the backend API (temperature, section_length, deterministic_mode) #18

@EQuBitC18

Description

Feature Summary

Expose generation controls through the backend API so callers can tune model behavior instead of relying on hardcoded defaults.

Problem/Use Case

Milestone 1 explicitly calls for user customization, but the current backend generation pipeline hardcodes key generation behavior:

temperature=0.7
max_tokens=1024
MAX_CONTEXT_LENGTH = 500

The generation endpoints do not currently accept request-level settings for temperature, section length, or deterministic generation.

Proposed Solution

Add validated optional request fields to the generation flow, for example:

temperature
section_length
deterministic_mode

Then wire those values through add_microcourse and generate_next_section into generate_microcourse_section and call_llm.
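As a starting point, the request fields could be collected in a small validated settings object. The sketch below is stdlib-only and entirely hypothetical (the real backend might use Pydantic instead); the field names come from this issue, while the ranges and defaults are assumptions:

```python
# Hypothetical sketch: validated optional generation controls.
# Field names are from the issue; ranges/defaults are assumptions.
from dataclasses import dataclass
from typing import Optional

@dataclass
class GenerationSettings:
    temperature: Optional[float] = None    # None -> current default (0.7)
    section_length: Optional[int] = None   # None -> current default (1024)
    deterministic_mode: bool = False

    def __post_init__(self):
        # Reject out-of-range values before they reach call_llm.
        if self.temperature is not None and not (0.0 <= self.temperature <= 2.0):
            raise ValueError("temperature must be between 0.0 and 2.0")
        if self.section_length is not None and not (64 <= self.section_length <= 4096):
            raise ValueError("section_length must be between 64 and 4096")

# Callers that omit every field keep today's behavior:
defaults = GenerationSettings()
custom = GenerationSettings(temperature=0.2, section_length=512)
```

Keeping every field optional means existing clients need no changes; only requests that explicitly set a control deviate from the hardcoded defaults.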

For deterministic mode, implement best-effort stability rather than claiming perfect determinism:

lower or zero temperature
stable prompt construction
optional disabling of noisy/recent web search context
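The deterministic-mode steps above could be folded into one place that resolves the effective LLM parameters. This is a minimal sketch; the helper name, the `use_web_search` flag, and the fallback defaults are assumptions, not the project's actual API:

```python
# Hypothetical sketch of merging request overrides with the current
# hardcoded defaults (0.7 / 1024) before calling the LLM.
def resolve_llm_params(temperature=None, section_length=None,
                       deterministic_mode=False):
    params = {
        "temperature": 0.7 if temperature is None else temperature,
        "max_tokens": 1024 if section_length is None else section_length,
        "use_web_search": True,  # assumed flag for web-search context
    }
    if deterministic_mode:
        # Best-effort stability: zero temperature and skip the noisy
        # web-search context. This does NOT guarantee identical outputs.
        params["temperature"] = 0.0
        params["use_web_search"] = False
    return params
```

With this shape, `deterministic_mode` deliberately overrides an explicit `temperature`, which keeps the "best-effort stable" contract simple; whether that precedence is right is a design decision to settle in review.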

Alternative Solutions

No response

Priority

Nice to have

Feature Scope

AI/ML (Content Generation)

Additional Context

No response

Checklist

  • This feature would benefit other users, not just me
  • I have searched for similar feature requests
  • I am willing to help implement this feature

Metadata

Assignees

No one assigned

    Labels

    enhancement (New feature or request)

    Projects

    No projects

    Relationships

    None yet

    Development

    No branches or pull requests