Google Gemma 4

Google's first Apache 2.0 open model family with native multimodal support

Gemma 4 is Google's newest open model family — E2B, E4B, 26B, and 31B sizes — built on the Gemini 3 architecture. For the first time, Google has released Gemma under Apache 2.0, making the models fully commercial-friendly with no Google-specific use restrictions. Every model in the family is natively multimodal from training: text, image, video, and audio inputs are all first-class. Context windows run 128K–256K tokens depending on size, and the models include built-in function calling, structured JSON output, and agentic workflow support.

The E2B and E4B variants target on-device mobile and laptop deployment, with native audio understanding designed for always-on assistant scenarios. NVIDIA has already published optimized Gemma 4 containers for RTX hardware. The Apache 2.0 license removes a major adoption barrier that held back Gemma 3 in commercial products, and the open-source model community's reaction was immediate and enthusiastic: Gemma 4 landed at #1 on Hacker News with 1,400+ points.
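
The built-in function calling follows the same pattern as other open chat models: you describe your tools as JSON schemas in the prompt, and the model replies with a structured tool call that your code parses and dispatches. A minimal sketch of that dispatch loop, assuming a JSON tool-call format of the shape {"tool": ..., "arguments": {...}} — the schema, tool, and reply here are illustrative, not taken from Google's documentation:

```python
import json

def get_weather(city: str) -> str:
    # Stand-in implementation; a real tool would call a weather API.
    return f"Sunny in {city}"

# Maps tool names (as advertised to the model in the prompt) to functions.
DISPATCH = {"get_weather": get_weather}

def handle_model_reply(reply: str) -> str:
    """Parse a structured tool call emitted by the model and execute it.

    Assumes the model was prompted to answer with JSON like
    {"tool": "...", "arguments": {...}} when it wants to call a tool,
    and with plain text otherwise.
    """
    try:
        call = json.loads(reply)
    except json.JSONDecodeError:
        return reply  # plain-text answer, no tool call
    if not isinstance(call, dict):
        return reply
    fn = DISPATCH.get(call.get("tool"))
    if fn is None:
        return reply
    return fn(**call.get("arguments", {}))

# Example: the model decided to call the weather tool.
model_reply = '{"tool": "get_weather", "arguments": {"city": "Zurich"}}'
print(handle_model_reply(model_reply))  # → Sunny in Zurich
```

In production the tool result would be fed back to the model as another turn so it can compose a final answer; this sketch only covers the parse-and-dispatch step.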

Panel Reviews

The Builder

Developer Perspective

Ship

Apache 2.0 means I can embed it in commercial products without legal review overhead. Native audio + 256K context on a 26B model that runs on a single A100 is a killer combo for production agent work. This is the open model I've been waiting for.

The Skeptic

Reality Check

Skip

Google has a history of releasing models and then quietly deprioritizing them once the PR cycle ends. Gemma 1 and 2 both got less maintenance than promised. The Apache license is great news, but trust has to be earned over time with consistent model updates.

The Futurist

Big Picture

Ship

Native multimodal understanding — including audio — on models small enough for phones changes what ambient computing looks like. Gemma 4 on-device could be the model layer for a generation of always-on smart devices that don't need cloud inference.

The Creator

Content & Design

Ship

Image, video, and audio in one open model I can run locally? The creative tooling possibilities are enormous. I can build private multimodal workflows for client work without data leaving my machine. Apache 2.0 seals it — this is a Ship.

Community Sentiment

Overall — 1,450 mentions (75% positive, 17% neutral, 8% negative)

Hacker News — 420 mentions

Apache 2.0 license as a first for Gemma

Reddit — 350 mentions

On-device E2B/E4B for mobile AI

Twitter/X — 680 mentions

Native multimodal from training vs. bolted-on vision