Showcase / question: a board-proven offline language runtime on ESP32-C3, and whether some local language capability may eventually move beyond quantized local LLMs #1018
Alpha-Guardian started this conversation in Show and tell
Hi MLX LM folks,
I wanted to share a small but unusual language-runtime project. Although it sits far outside the usual Apple Silicon path, it may still be relevant to the broader question of how local language capability is converted, quantized, packaged, and deployed.
We built a public demo line called Engram and deployed it on a commodity ESP32-C3.
Current public numbers:
Host-side benchmark capability:
- LogiQA = 0.392523
- IFEval = 0.780037

Published board proof:
- LogiQA 642 = 249 / 642 = 0.3878504672897196
- host_full_match = 642 / 642
- 1,380,771 bytes

Important scope note:
This is not presented as unrestricted, open-input native LLM generation on an MCU. The board-side path is closer to a flash-resident, table-driven runtime.
So this is not a standard quantized local LLM running in a familiar local inference loop. It is closer to a task-specialized language runtime whose behavior has been crystallized into a compact executable form under severe physical constraints.
Repo:
https://github.com/Alpha-Guardian/Engram
I’m posting here because MLX LM is one of the clearest public examples of how local language models are turned into quantized, usable, distributable runtime experiences.
What I’d be curious about is whether systems like this should be thought of as:
If this direction is relevant to your team, I’d be glad to compare notes.