Google AI Edge Gallery

Run Gemma 4 and other open models fully on-device — no cloud, no data sent

Google AI Edge Gallery is an Android and iOS app that lets users run open-source language models — including the newly released Gemma 4 family — entirely on-device with no internet required. It's essentially a showcase and sandbox for on-device ML, letting developers and power users benchmark models on their own hardware and explore capabilities without any data leaving the device. Version 1.0.11 shipped on April 2, 2026, adding support for Gemma 4 and on-device function calling.

The app includes Prompt Lab for parameter testing, AI Chat with visible reasoning traces, image recognition, audio transcription, translation, and a small experimental offline game called Tiny Garden that uses natural language as input. The project has 16.6k stars and is fully open-source. With AICore integration landing in Android, Gemma 4 can run via the OS-level model runtime — meaning future apps can share a single on-device model instance rather than each bundling their own. This is the infrastructure play underneath the gallery.
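The shared-runtime idea can be sketched abstractly: instead of each app bundling and loading its own copy of the weights, every caller asks an OS-managed singleton for a handle. This is a toy Python illustration of the concept only, not the AICore API; the class and attribute names are invented for the example.

```python
class ModelRuntime:
    """Toy stand-in for an OS-level model runtime (concept only, not AICore)."""
    _instance = None

    def __init__(self):
        self.load_count = 0   # how many times weights were actually loaded
        self.weights = None

    @classmethod
    def acquire(cls):
        # All callers share one instance, so the (large) weights load at most once.
        if cls._instance is None:
            cls._instance = cls()
            cls._instance.weights = "gemma-weights"  # placeholder for a real load
            cls._instance.load_count += 1
        return cls._instance


# Two "apps" requesting the model receive the same instance; one load, not two.
app_a = ModelRuntime.acquire()
app_b = ModelRuntime.acquire()
print(app_a is app_b, app_a.load_count)  # True 1
```

The payoff is the same one the summary describes: memory and storage for the model are paid once per device, not once per app.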

Panel Reviews

The Builder

Developer Perspective

Ship

The function calling demo on-device is the real headline here. If Gemma 4 can handle tool use locally, that's a viable path to offline agents on Android — which opens up use cases in low-connectivity environments that were impossible before. The AICore integration means you write to one API and the OS handles the model.
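The tool-use loop behind that claim is model-agnostic: the local model emits a structured tool call, the app executes it on-device, and the result is fed back for the final reply. A minimal Python sketch of that flow, with a stubbed function standing in for the local model (the tool names, JSON call format, and stub are illustrative assumptions, not the Gallery's actual API):

```python
import json

# Hypothetical offline tools the agent can invoke; names are illustrative.
TOOLS = {
    "get_battery_level": lambda: {"percent": 87},
    "set_alarm": lambda hour, minute: {"status": "set", "time": f"{hour:02d}:{minute:02d}"},
}

def fake_on_device_model(prompt: str) -> str:
    """Stand-in for a local LLM: always emits one JSON tool call for this demo."""
    return json.dumps({"tool": "set_alarm", "args": {"hour": 7, "minute": 30}})

def run_agent_turn(user_msg: str) -> dict:
    # 1. The local model decides which tool to call and with what arguments.
    call = json.loads(fake_on_device_model(user_msg))
    # 2. The app executes the tool entirely on-device; no network round trip.
    result = TOOLS[call["tool"]](**call["args"])
    # 3. In a real agent, the result would be fed back to the model for a reply.
    return result

print(run_agent_turn("Wake me at 7:30"))  # {'status': 'set', 'time': '07:30'}
```

Because every step runs locally, the loop keeps working with the radio off — which is exactly what makes offline agents in low-connectivity environments viable.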

The Skeptic

Reality Check

Skip

On-device model performance is still heavily hardware-gated — Gemma 4 running well on a Pixel 9 Pro doesn't mean it runs acceptably on the median Android device. Google controls the showcase, so the benchmarks are cherry-picked for their best hardware. Until AICore reaches broad adoption, this is a preview for early adopters.

The Futurist

Big Picture

Ship

The combination of AICore (OS-level model runtime) and on-device function calling is the blueprint for AI that survives network failures, regulatory data-residency requirements, and cloud cost pressures. Google is betting that the edge is where AI matures — this gallery is the proof of concept.

The Creator

Content & Design

Ship

Audio transcription and translation that works offline and doesn't store your recordings anywhere is genuinely appealing for journalists, field researchers, and creators in low-connectivity areas. The privacy story alone makes this worth installing.

Community Sentiment

Overall: 640 mentions
70% positive · 21% neutral · 9% negative
Hacker News: 140 mentions

AICore developer preview and hardware requirements

Reddit: 190 mentions

Gemma 4 on-device performance vs. cloud models

Twitter/X: 310 mentions

Privacy-first on-device inference without cloud calls