LiteRT-LM
Run Gemma 4 and other LLMs fully on-device — no cloud required
LiteRT-LM is Google's production-grade, open-source inference framework for deploying Large Language Models on edge devices: phones, IoT hardware, Raspberry Pi, and desktop machines without cloud connectivity. Launched April 7, 2026 alongside Gemma 4 support, it enables developers to run Gemma, Llama, Phi-4, Qwen, and other models entirely locally via a simple CLI or embedded SDK.

The framework handles the hard parts of edge inference: memory-mapped per-layer embeddings, 2-bit and 4-bit quantization, NPU acceleration for Qualcomm and MediaTek chipsets (early access), and cross-platform support spanning Android, iOS, Web, and desktop. Gemma 4's E2B variant runs in under 1.5 GB of RAM on some devices, making full LLM functionality viable on mid-range hardware.

What makes LiteRT-LM significant is the agentic angle. It is one of the first frameworks to support multi-step agentic workflows running completely on-device (function calling, tool use, vision and audio inputs) without a single network request. For developers building privacy-sensitive apps or offline-capable agents, this changes the calculus entirely.
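The "embedded SDK" path is concrete enough to sketch. The snippet below follows the Engine/Session pattern from the LiteRT-LM repository; the include path, the model file path, and the assumption that every factory returns a status-wrapped value are mine and may not match the shipping SDK exactly.

```cpp
// Minimal sketch of single-turn, fully local generation through the
// embedded C++ API, following the Engine/Session pattern from the
// LiteRT-LM repository. The include path, model path, and status-wrapped
// return types are assumptions and may differ from the shipping SDK.
#include <iostream>

#include "runtime/engine/engine.h"  // assumed include path

using litert::lm::Engine;
using litert::lm::EngineSettings;
using litert::lm::InputText;
using litert::lm::ModelAssets;
using litert::lm::SessionConfig;

int main() {
  // Point at a locally stored model bundle (hypothetical path and file name).
  auto model_assets = ModelAssets::Create("/data/local/tmp/gemma4-e2b.litertlm");
  if (!model_assets.ok()) return 1;

  // CPU is the portable default backend; GPU/NPU are selected the same way.
  auto settings =
      EngineSettings::CreateDefault(*model_assets, litert::lm::Backend::CPU);
  if (!settings.ok()) return 1;

  auto engine = Engine::CreateEngine(*settings);
  if (!engine.ok()) return 1;

  // A Session owns the conversation state (prompt history, KV cache).
  auto session = (*engine)->CreateSession(SessionConfig::CreateDefault());
  if (!session.ok()) return 1;

  // Blocking, single-turn call; no network request is made at any point.
  auto responses = (*session)->GenerateContent(
      {InputText("Why does on-device inference matter?")});
  if (!responses.ok()) return 1;

  std::cout << *responses << std::endl;
  return 0;
}
```

Keeping one Session per conversation is the natural shape here: the KV cache lives with the session, which matters more on RAM-constrained devices than on servers.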
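The agentic claim is also worth making concrete. The sketch below shows only the control flow of an offline tool-use loop: generate, detect a tool request, execute the tool locally, feed the result back, regenerate. The plain-text `TOOL:` protocol, the `get_local_time` helper, and the `Generate` callback are illustrative inventions, not LiteRT-LM's function-calling API; they stand in for the structured interface the framework provides.

```cpp
// Hypothetical sketch of an offline tool-use loop. The "TOOL:" text
// protocol and get_local_time() helper are illustrative inventions, not
// part of LiteRT-LM; `Generate` stands in for a Session generation call
// like the one in the previous sketch. Every step stays on the device.
#include <ctime>
#include <functional>
#include <iostream>
#include <string>

// Illustrative local tool: runs on-device, no network involved.
std::string get_local_time() {
  std::time_t now = std::time(nullptr);
  return std::ctime(&now);
}

std::string RunAgentTurn(
    const std::function<std::string(const std::string&)>& Generate,
    std::string prompt) {
  for (int step = 0; step < 4; ++step) {  // bound the loop defensively
    std::string reply = Generate(prompt);
    // If the model asked for a tool, run it locally and feed the result back.
    if (reply.rfind("TOOL: get_local_time", 0) == 0) {
      prompt += "\n[tool result] " + get_local_time();
      continue;
    }
    return reply;  // final answer, produced without a single network request
  }
  return "agent loop exceeded step budget";
}

int main() {
  // Stub model for demonstration: first asks for the tool, then answers.
  int calls = 0;
  auto fake_generate = [&](const std::string& prompt) -> std::string {
    if (calls++ == 0) return "TOOL: get_local_time";
    return "It is currently: " + prompt.substr(prompt.rfind("] ") + 2);
  };
  std::cout << RunAgentTurn(fake_generate, "What time is it?") << std::endl;
  return 0;
}
```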
Panel Reviews
The Builder
Developer Perspective
“This is the real deal for edge AI development. The CLI makes it trivial to get Gemma 4 running locally in minutes, and function calling support means you can build actual agentic apps that work offline. Google's backing means this won't be abandoned in six months.”
The Skeptic
Reality Check
“NPU acceleration is still early access and the model selection is Google-heavy. Developers building with Llama or Mistral have Ollama and llama.cpp with far more mature ecosystems. LiteRT-LM needs a year of community baking before it rivals those alternatives.”
The Futurist
Big Picture
“On-device agentic AI is the privacy-preserving future of personal computing. LiteRT-LM gives Google a strong position in edge inference infrastructure — expect this to become the default runtime for Android AI features within 18 months.”
The Creator
Content & Design
“The vision and audio input support unlocks real creative tools that work on a plane or in a studio without WiFi. Running a multimodal model locally with no usage fees means I can experiment with AI-assisted workflows without watching a billing meter.”
Community Sentiment
“Impressed by sub-1.5GB RAM for Gemma 4 E2B”
“Compared against llama.cpp and Ollama”
“On-device agentic workflows with tool use”