Releases: zetic-ai/ZeticMLangeiOS

1.5.14

07 Apr 11:11

Updated

  • Fixed a bug where large models failed to load from the local cache.

1.5.13

06 Apr 10:20

Updated

  • Added Lite tier support
  • Improved Gemma 4 support for LLM
  • Fixed an issue preventing LLM models from running in some builds

1.5.12

06 Apr 07:52

Updated

  • Added the Lite tier to the enum
  • Updated the LLM runtime to support gemma4

Known Issues

  • LLM initialization may fail in some distributed iOS builds, preventing chat or model execution.
  • If affected, update to the latest release.

1.5.11

24 Mar 08:58

Updated

Same as 1.5.10.

1.5.10

24 Mar 08:19

Updated

  • Updated the ZeticMLangeLLMModel API
    • Support selecting apType (.CPU / .GPU) together with quantType when the target is .LLAMA_CPP
    • Added initOption to configure LLM initialization options
    • Moved kvCacheCleanupPolicy from the initializer arguments into initOption
    • Support configuring the context length with initOption.nCtx
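A minimal sketch of how the updated initializer might be called. Only ZeticMLangeLLMModel, apType, quantType, .LLAMA_CPP, initOption, kvCacheCleanupPolicy, and nCtx come from the notes above; the option type name, enum cases, and remaining parameters are assumptions until the docs are updated.

```swift
import ZeticMLange

// Assumed option type name; nCtx and kvCacheCleanupPolicy come from the notes.
var initOption = ZeticMLangeLLMInitOption()
initOption.nCtx = 4096                       // context length
initOption.kvCacheCleanupPolicy = .onDeinit  // case name is an assumption

// Initializer shape is an assumption; apType/quantType pairing with
// .LLAMA_CPP is from the notes above.
let model = try ZeticMLangeLLMModel(
    name: "my-model",          // placeholder model name
    target: .LLAMA_CPP,
    apType: .GPU,              // .CPU / .GPU selectable alongside quantType
    quantType: .Q4_0,          // quant type case name is an assumption
    initOption: initOption
)
```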

TBD

  • Update docs about the new ZeticMLangeLLMModel API

1.5.9

17 Mar 11:19

Updated

  • Added cacheHandlingPolicy and ModelCacheManager.
  • Updated the default modelMode in ZeticMLangeLLMModel.
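A hedged sketch of how the new cache handling might look in use. Only cacheHandlingPolicy and ModelCacheManager are named in the notes above; the parameter placement, policy case, and manager methods are assumptions.

```swift
import ZeticMLange

// Assumed placement: a cacheHandlingPolicy argument on the model initializer.
let model = try ZeticMLangeLLMModel(
    name: "my-model",            // placeholder model name
    cacheHandlingPolicy: .keep   // policy case name is an assumption
)

// ModelCacheManager presumably manages cached model files on disk;
// this accessor and method are assumptions, not the confirmed API.
ModelCacheManager.shared.clearAll()
```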

TBD

  • Update docs about cache handling

1.5.8

20 Feb 08:28

Release version 1.5.8

1.5.7

15 Feb 15:06

Full Changelog: 1.5.6...1.5.7

1.5.6

06 Feb 08:41

Full Changelog: 1.5.5...1.5.6

1.5.5

05 Feb 10:47

Full Changelog: 1.5.4...1.5.5