[GENERAL] Test Math & Code Generation Behavior of GPT-4.5 and Ollama #20

@EQuBitC18

Issue Type

Performance Issue

Description

It would be informative to know how the current project produces Math & Code outputs. Some tests were run to confirm basic functionality (LaTeX formatting, code snippets rendered in a dedicated field with a "Copy" button), but beyond that no deeper testing has been done, in particular no consistency tests.

So a thorough report would be useful to get an in-depth insight into how Math & Code generation works and how to make it more consistent across generations.

Steps you can follow (optional; this is just a nudge in the relevant direction, so feel free to experiment with the generator!):

  1. Create a microcourse that generates code examples or math expressions.
  2. Fetch the resulting microcourse or section through the API.
  3. Inspect the code_examples and math_expressions fields.
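Step 3 could start from a small consistency check like the one below. Note that the response shape (a section dict with code_examples and math_expressions lists, and the code, language, and latex keys inside them) is an assumption based only on the field names mentioned in this issue; the real API payload may differ.

```python
def check_section(section: dict) -> dict:
    """Run cheap consistency checks on one generated section.

    The payload shape is hypothetical: the issue only names the
    code_examples and math_expressions fields, not their contents.
    """
    results = {}

    # Every code example should be non-empty and declare a language,
    # so the UI can render it in the dedicated "Copy" field.
    code = section.get("code_examples", [])
    results["code_nonempty"] = all(ex.get("code", "").strip() for ex in code)
    results["code_has_lang"] = all(ex.get("language") for ex in code)

    # Balanced braces are a cheap proxy for "this LaTeX renders
    # without error"; a real test would feed it to a renderer.
    def braces_balanced(tex: str) -> bool:
        depth = 0
        for ch in tex:
            if ch == "{":
                depth += 1
            elif ch == "}":
                depth -= 1
                if depth < 0:
                    return False
        return depth == 0

    math = section.get("math_expressions", [])
    results["math_balanced"] = all(
        braces_balanced(m.get("latex", "")) for m in math
    )
    return results


# Example with a hypothetical section payload:
sample = {
    "code_examples": [{"language": "python", "code": "print('hi')"}],
    "math_expressions": [{"latex": r"\frac{a}{b}"}],
}
print(check_section(sample))
```

Running the same checks over many regenerations of the same microcourse would give a first quantitative picture of consistency.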

Additional Context

No response

Checklist

  • I have searched existing issues and discussions
  • This is not a security vulnerability (see SECURITY.md)

Metadata

Assignees

No one assigned

    Labels

    • documentation: Improvements or additions to documentation
    • good first issue: Good for newcomers
    • help wanted: Extra attention is needed
    • question: Further information is requested

    Projects

    No projects

    Relationships

    None yet

    Development

    No branches or pull requests