LiteRT-LM

Google's open-source engine for LLMs on phones, browsers & IoT

LiteRT-LM is Google AI Edge's production-grade open-source inference framework for running large language models directly on edge devices — Android phones, iPhones, web browsers via WebAssembly, and IoT hardware. It powers the on-device GenAI features in Chrome, Chromebook Plus, and Pixel Watch that Google launched alongside Gemma 4. The framework supports a wide model zoo including Gemma, Llama, Phi-4, and Qwen, with quantization pipelines that shrink models enough to fit hardware as constrained as a wearable. It also supports function calling and tool use, enabling lightweight agentic workflows without a cloud round-trip, and a JavaScript API makes browser integration straightforward for web developers. LiteRT-LM is Google's answer to Apple Intelligence's on-device approach — an open, cross-platform runtime rather than a proprietary stack. Because it is open source, any developer can ship private, offline AI features without touching Google's servers, which matters enormously for healthcare, finance, and enterprise applications.
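The quantization claim above is easy to sanity-check with back-of-envelope arithmetic. The sketch below (a hypothetical 1B-parameter model; the numbers are illustrative, not a measurement of any LiteRT-LM pipeline) shows why dropping from fp16 to int4 weights is the difference between a model that cannot fit on a watch and one that might:

```javascript
// Rough weight-memory arithmetic for quantization.
// Weights dominate model size; each parameter costs bitsPerWeight / 8 bytes.
function modelBytes(paramCount, bitsPerWeight) {
  return (paramCount * bitsPerWeight) / 8;
}

const params = 1e9; // illustrative 1B-parameter model (assumption, not a specific release)
const fp16GiB = modelBytes(params, 16) / 2 ** 30;
const int4GiB = modelBytes(params, 4) / 2 ** 30;

console.log(fp16GiB.toFixed(2)); // "1.86" — too large for most wearables
console.log(int4GiB.toFixed(2)); // "0.47" — plausibly within reach
```

Real quantization pipelines also quantize activations and keep some layers at higher precision, so actual footprints vary, but the 4x weight reduction is the core of the story.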

Panel Reviews

The Builder

Developer Perspective

Ship

A unified inference runtime across Android, iOS, browser, and IoT with function calling support is exactly what the edge AI ecosystem has been missing. The WebAssembly path alone opens up private on-device AI in any browser without installing anything. Ship this immediately.
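On-device function calling boils down to a small dispatch pattern: the model emits a structured tool call instead of free text, and the runtime routes it to a local handler with no network round-trip. A minimal sketch of that pattern follows; the tool names, schema shape, and `dispatchToolCall` helper are illustrative inventions, not LiteRT-LM's actual API:

```javascript
// 1. Declare tools the model may call, mapping names to local handlers.
//    (Hypothetical tools for illustration; real handlers would query the device.)
const tools = {
  get_battery_level: {
    description: "Return the device battery percentage",
    handler: () => ({ percent: 87 }), // stubbed device query
  },
  set_timer: {
    description: "Start a countdown timer",
    handler: ({ seconds }) => ({ started: true, seconds }),
  },
};

// 2. A function-calling model emits a structured call rather than prose, e.g.:
const modelOutput = '{"tool": "set_timer", "args": {"seconds": 300}}';

// 3. The runtime parses the call and invokes the matching local handler —
//    the whole agentic loop stays on-device.
function dispatchToolCall(raw) {
  const { tool, args } = JSON.parse(raw);
  const entry = tools[tool];
  if (!entry) throw new Error(`unknown tool: ${tool}`);
  return entry.handler(args ?? {});
}

const result = dispatchToolCall(modelOutput);
console.log(result); // { started: true, seconds: 300 }
```

In a full loop the handler's result is fed back to the model as a tool response so it can compose a final answer, but the routing step above is the part that makes offline agentic workflows possible.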

The Skeptic

Reality Check

Skip

Edge inference is still severely constrained — even quantized Gemma 3B on a phone gives you a noticeably worse experience than cloud APIs. Google's history with edge AI frameworks is also mixed: TensorFlow Lite, ML Kit, and MediaPipe all launched with fanfare and then received inconsistent maintenance.

The Futurist

Big Picture

Ship

This is infrastructure for the next decade. When models run on-device with no latency and no data leaving the device, entirely new categories of ambient, private AI become possible. LiteRT-LM is the missing runtime layer for that world — and Google open-sourcing it means the ecosystem builds around it rather than around Apple.

The Creator

Content & Design

Ship

Offline AI for creative apps is a game-changer — imagine Procreate or Figma with on-device generative features that work on a plane. The browser WebAssembly support means I can prototype these ideas without an app store or backend. Very excited about the creative possibilities here.

Community Sentiment

Overall: 770 mentions (67% positive, 23% neutral, 10% negative)

Hacker News: 180 mentions

Comparison to Apple's Core ML and whether Google will maintain it long-term

Reddit: 240 mentions

WebAssembly path for browser-based local AI and privacy implications

Twitter/X: 350 mentions

Function calling on-device enabling agentic workflows offline