This project explores running local language models entirely offline on Android devices, using open-source tools like llama.cpp, UserLAnd, and various Linux distributions (Arch, Alpine, Ubuntu, Kali) installed in a rooted/non-rooted environment.
The goal: turn old or underused Android devices into capable AI inference machines with complete user control and no cloud dependency.
- Install and run local LLMs (e.g., TinyLLaMA) on Android using UserLAnd
- Compile and use llama.cpp on-device (OnePlus 3T, 6GB RAM)
- Scripts for installing dependencies and launching models
- Architecture for reproducible experiments and logging
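Once llama.cpp is built on-device, a run can be sized to the phone's cores. A minimal sketch, assuming a recent llama.cpp build (binary `llama-cli`, flags `-m`/`-t`/`-n`/`-p`); the model path is a placeholder for whatever gguf file you downloaded:

```shell
#!/bin/sh
# Sketch: assemble a llama.cpp invocation sized to the device's CPU cores.
# The model path below is a placeholder, not a file this repo ships.
MODEL="$HOME/models/tinyllama-1.1b-chat-q4_k_m.gguf"  # placeholder path
THREADS=$(nproc)                                       # use all available cores
CMD="./build/bin/llama-cli -m $MODEL -t $THREADS -n 128 -p \"Hello\""
echo "$CMD"
```

On a 6GB phone, a 4-bit quantized ~1B model like TinyLLaMA leaves comfortable headroom; larger models need smaller context or more aggressive quantization.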
- Android phone (6GB RAM or more recommended)
- UserLAnd app installed from Play Store
- Working session using Arch Linux (or other supported distro)
- 3–5GB free storage for model + tools
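The storage requirement can be checked from inside the UserLAnd session before pulling a model. A minimal sketch; the 5GB threshold mirrors the rough requirement above and should be tuned per model:

```shell
#!/bin/sh
# Sketch: preflight check for free storage before downloading a model.
MIN_GB=5                                         # assumption: tune per model
FREE_KB=$(df -Pk "$HOME" | awk 'NR==2 {print $4}')  # -P keeps one line per fs
FREE_GB=$((FREE_KB / 1024 / 1024))
if [ "$FREE_GB" -ge "$MIN_GB" ]; then
  echo "storage OK: ${FREE_GB}GB free"
else
  echo "low storage: ${FREE_GB}GB free, want ${MIN_GB}GB+"
fi
```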
```
offline-ai-android/
├── scripts/          # install and run helpers
├── docs/             # setup logs and markdown walkthroughs
├── userland-setup/   # Linux distro notes
├── experiments/      # test runs, logs, model benchmarks
├── models/           # tested gguf models
└── assets/           # screenshots, diagrams
```
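For reproducible experiments, a launch helper in `scripts/` can tee each run into `experiments/` under a timestamped name. A sketch only; the repo path and the logged line are illustrative, and `echo` stands in for the real llama.cpp invocation:

```shell
#!/bin/sh
# Sketch: log each model run under experiments/ with a timestamped filename.
# REPO is a placeholder default; override it to point at your checkout.
REPO="${REPO:-$HOME/offline-ai-android}"
mkdir -p "$REPO/experiments"
STAMP=$(date +%Y%m%d-%H%M%S)
LOG="$REPO/experiments/run-$STAMP.log"
# Replace 'echo' with the actual llama.cpp command on-device.
echo "model=tinyllama threads=$(nproc)" | tee "$LOG"
```

Keeping one log per run makes it straightforward to compare benchmarks across distros and build flags later.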