Apfel

Your Mac's hidden on-device LLM, finally set free

Apfel is a Swift CLI that does something Apple didn't: it exposes the on-device LLM baked into every Apple Intelligence-enabled Mac as a proper OpenAI-compatible local server at localhost:11434. Any app that speaks Ollama's API — LM Studio, Continue, OpenWebUI, your own scripts — can route requests to Apple's FoundationModels framework without modification. The feature set is more complete than most indie wrappers offer: streaming responses, tool calling with MCP support, file attachments, an interactive chat mode, and a debug SwiftUI GUI for inspecting token flow. Inference is fully on-device, with no API keys, no telemetry, and no cost beyond electricity. On an M-series Mac it runs at native Apple Neural Engine speeds — typically 40-80 tokens/second, depending on which model variant is active.

The catch is real: you need macOS 26 Tahoe (currently in beta) with Apple Intelligence enabled. But for the tens of millions of Apple Silicon Mac users who already qualify or soon will, this is the quiet unlock of a model they already own. The "your Mac already has a free LLM" framing is resonating — the repo hit 3,500 stars in days.
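Because the server speaks the same dialect as Ollama, "your own scripts" can mean a few lines of plain Python. A minimal sketch, assuming the server exposes Ollama's usual OpenAI-compatible route at `/v1/chat/completions`; the model identifier `"apple-foundation"` is a placeholder for illustration, not a documented name:

```python
import json
import urllib.request

# Assumed endpoint: Apfel mirroring Ollama's OpenAI-compatible route.
BASE_URL = "http://localhost:11434/v1/chat/completions"


def build_chat_request(prompt: str, stream: bool = False) -> dict:
    """Assemble an OpenAI-style chat-completion payload."""
    return {
        "model": "apple-foundation",  # placeholder model id
        "messages": [{"role": "user", "content": prompt}],
        "stream": stream,
    }


def send_chat(prompt: str) -> str:
    """POST the payload to the local server and return the reply text."""
    body = json.dumps(build_chat_request(prompt)).encode()
    req = urllib.request.Request(
        BASE_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        data = json.load(resp)
    # Standard OpenAI response shape: first choice's message content.
    return data["choices"][0]["message"]["content"]
```

With Apfel running and Apple Intelligence enabled, `send_chat("Summarize this repo in one sentence.")` would hit the on-device model and return its reply; no key or account is involved.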

Panel Reviews

The Builder

Developer Perspective

Ship

If you're already on the Tahoe beta, this is an instant install. Drop-in Ollama compatibility means every tool I already use just works — no friction, no cost. The MCP + tool calling support is unexpectedly polished for a one-dev project.

The Skeptic

Reality Check

Skip

The 'free LLM on your Mac' pitch is compelling but the reality is gated behind a beta OS most professionals won't run for months. Apple's FoundationModels API can also change or restrict access at any time — this kind of undocumented wrapper has a short shelf life if Apple decides to lock it down.

The Futurist

Big Picture

Ship

Apple quietly shipped a capable on-device model and Apfel is the key that unlocks it for the developer ecosystem. This is a preview of a future where every device has sovereign AI — no network, no subscription, no permission slip from a cloud provider.

The Creator

Content & Design

Ship

Running AI locally for writing assistance without sending my drafts to a cloud feels like a material privacy win. Once macOS Tahoe ships properly, this is going to be the default starting point for privacy-conscious creators who already own a Mac.

Community Sentiment

Overall: 830 mentions
73% positive, 19% neutral, 8% negative

Hacker News: 230 mentions

Ollama-compatible — existing tooling just works

Reddit: 180 mentions

macOS 26 beta requirement is a dealbreaker for now

Twitter/X: 420 mentions

Free local LLM already on your Mac