Showcase / question: a board-proven offline language runtime on ESP32-C3, and whether some language capability may eventually move beyond general dense model deployment #3272
Alpha-Guardian started this conversation in Show and tell
Hi MLX folks,
I wanted to share a small but unusual language-runtime project that may still be relevant to the broader question of how language capability is represented, trained, and deployed on-device, even though it sits far outside the usual Apple Silicon path.
We built a public demo line called Engram and deployed it on a commodity ESP32-C3.
Current public numbers:
Host-side benchmark capability:
- LogiQA = 0.392523
- IFEval = 0.780037

Published board proof:
- LogiQA 642 = 249 / 642 = 0.3878504672897196
- host_full_match = 642 / 642
- 1,380,771 bytes

Important scope note:
This is not presented as unrestricted, open-input native LLM generation on an MCU.
The board-side path is closer to a flash-resident, table-driven runtime.
So this is not a standard dense model deployed to an efficient local inference runtime. It is closer to a task-specialized language runtime whose behavior has been crystallized into a compact executable form under severe physical constraints.
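Since the repo internals aren't reproduced in this post, here is only a rough, hypothetical sketch of what a "flash-resident, table-driven runtime" can mean on a part like the ESP32-C3. The names, the FNV-1a hashing, and the table layout below are illustrative assumptions for the sake of discussion, not Engram's actual design:

```c
#include <stdint.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical entry: a precomputed key hash mapped to a crystallized
 * response. On the ESP32-C3, `const` data like this table is typically
 * placed in flash and executed/read in place, keeping RAM usage low. */
typedef struct {
    uint32_t key_hash;    /* hash of the canonicalized prompt */
    const char *response; /* precompiled answer stored in flash */
} engram_entry_t;

/* Standard 32-bit FNV-1a; any stable hash would do here. */
uint32_t fnv1a(const char *s) {
    uint32_t h = 2166136261u;
    while (*s) {
        h ^= (uint8_t)*s++;
        h *= 16777619u;
    }
    return h;
}

/* Tiny illustrative table; the published artifact is ~1.38 MB of data.
 * The key here is fnv1a("") purely so the example is self-checking;
 * real keys would be generated by a build step. */
static const engram_entry_t TABLE[] = {
    { 2166136261u, "crystallized answer" },
};

/* Lookup only: out-of-table input yields NULL rather than
 * open-ended generation, matching the scope note above. */
const char *engram_lookup(const char *canonical_prompt) {
    uint32_t h = fnv1a(canonical_prompt);
    for (size_t i = 0; i < sizeof TABLE / sizeof TABLE[0]; ++i)
        if (TABLE[i].key_hash == h)
            return TABLE[i].response;
    return NULL;
}
```

The point of the sketch is the shape of the design: behavior is fixed at build time into constant data, and the runtime is a bounded lookup rather than a forward pass, which is why it fits the flash and RAM budget of a commodity MCU.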
Repo:
https://github.com/Alpha-Guardian/Engram
I'm posting here because MLX sits at an interesting intersection of language-model research, local implementation, and efficient deployment.
What I'd be curious about is whether systems like this should be thought of as a distinct kind of deployment artifact.
If this direction is relevant to your team, I’d be glad to compare notes.