Related project: AI agent on Galaxy Watch with 2.8 MB binary #648

Description

@ThinkOffApp

Your work on fast multimodal LLM inference on mobile devices is relevant to what we're doing with ClawWatch (https://github.com/ThinkOffApp/ClawWatch).

ClawWatch runs an OpenClaw AI agent natively on Galaxy Watch. The on-device runtime is NullClaw, a 2.8 MB static ARM binary written in Zig. Voice input uses Vosk for offline speech recognition. LLM inference currently goes through a network gateway rather than running on-device, which is the main limitation.
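As a rough sketch of that split, one agent turn today does speech recognition locally but hands the prompt off to the network for inference. All function names below are hypothetical placeholders for illustration, not ClawWatch's actual API:

```python
def transcribe(audio: bytes) -> str:
    """Placeholder for the on-device Vosk recognizer (runs locally)."""
    return "what's the weather"

def gateway_complete(prompt: str) -> str:
    """Placeholder for the remote inference gateway the agent calls today.
    With on-device inference, this network hop would disappear."""
    return f"[gateway response to: {prompt}]"

def agent_turn(audio: bytes) -> str:
    """One agent turn: speech-to-text on-device, LLM inference off-device."""
    prompt = transcribe(audio)       # local (Vosk)
    return gateway_complete(prompt)  # remote (the current limitation)
```

Replacing `gateway_complete` with a local inference call is exactly the gap this issue is about.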

If mllm's inference engine could run small models directly on watch-class hardware (Exynos W930/W1000, 2 GB RAM), that would enable fully offline AI agents on wearables. The agent runtime and voice pipeline already work locally. The missing piece is local inference.
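A back-of-envelope check suggests the 2 GB budget is plausible for small models. This is only weight storage; actual runtime also needs KV cache, activations, and the OS's share of RAM, so treat the numbers as a lower bound:

```python
def quantized_model_mib(n_params: float, bits_per_weight: int) -> float:
    """Rough weight-only footprint of a quantized model in MiB."""
    return n_params * bits_per_weight / 8 / (1024 ** 2)

# A 0.5B-parameter model at 4-bit weights:
# 0.5e9 params * 4 bits / 8 = 250 MB, well under 2 GB.
print(round(quantized_model_mib(0.5e9, 4)))   # ~238 MiB
print(round(quantized_model_mib(1.0e9, 8)))   # ~954 MiB
```

So sub-1B models at 4-bit quantization would leave meaningful headroom on a 2 GB device, assuming the runtime itself stays lean.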

Has anyone tested mllm on Wear OS or other watch-class ARM SoCs?
