Apfel

Free CLI for Apple's on-device LLM — no API key, no downloads, runs on macOS

Apfel is an open-source command-line tool that unlocks Apple's built-in Foundation Model (shipped with macOS Tahoe) via a clean CLI, an OpenAI-compatible local server on port 11434, and an interactive chat mode. No model download, no API key, no configuration — if you're on Apple Silicon running macOS Tahoe, the model is already there.

The OpenAI-compatible server mode is the clever move: any tool built on the OpenAI SDK can point at localhost:11434 and use Apple's on-device ~3B model for free, with complete privacy. The MCP support adds external tool-calling, making it genuinely useful for shell automation, text transformation, and local agent workflows.

The honest constraints: 4,096-token context (~3,000 words) and mixed 2-bit/4-bit quantization mean this isn't a replacement for cloud models on hard tasks. But for scripting, classification, summarization, and quick transformations — all offline, all private, all free — Apfel makes the underutilized neural engine on every Mac actually accessible.
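Because the server speaks the OpenAI wire format, any HTTP client can talk to it. Here is a minimal sketch using only the Python standard library; the endpoint path is assumed to follow the standard OpenAI convention, and the model name is a placeholder rather than a documented identifier:

```python
import json
import urllib.request

def local_chat(prompt, base_url="http://localhost:11434"):
    """Send one chat turn to the OpenAI-compatible local server and return the reply."""
    payload = {
        # Placeholder model name: check the Apfel docs for the real identifier.
        "model": "apple-foundation",
        "messages": [{"role": "user", "content": prompt}],
    }
    req = urllib.request.Request(
        base_url + "/v1/chat/completions",  # standard OpenAI-style path (assumed)
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    # The OpenAI response shape puts the text under choices[0].message.content.
    return body["choices"][0]["message"]["content"]
```

Because the interface matches the cloud API, the same request works unchanged through the official OpenAI SDK by pointing its `base_url` at the local server.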

Panel Reviews

The Builder

Developer Perspective

Ship

OpenAI-compatible server on localhost means I can prototype automations and scripts against a real LLM without paying for API calls or waiting on rate limits. The pipe-friendly CLI with proper exit codes is exactly what shell scripting needs. For Mac-native tooling, this is a genuine gap-filler.
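That pipe-and-exit-code contract is easy to wrap in scripts. A sketch of the pattern: the helper below is generic, and the `apfel` invocation shown in the comment is a guess at the interface, not documented usage:

```python
import subprocess

def pipe_through(cmd, text):
    """Run a command, feed `text` on stdin, and return (exit_code, stdout)."""
    result = subprocess.run(cmd, input=text, capture_output=True, text=True)
    return result.returncode, result.stdout

# In a real workflow the command would be the Apfel CLI, for example:
#   code, summary = pipe_through(["apfel", "summarize"], open("notes.txt").read())
# ("summarize" is a guessed subcommand; see the project's README.)
# A proper nonzero exit code lets the calling script bail out instead of
# silently passing an error message downstream.
```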

The Skeptic

Reality Check

Skip

A 4,096-token context and ~3B quantized model will fail on anything non-trivial — complex coding, factual recall, multi-step reasoning. You'd still reach for Claude or GPT-4 for real work, making this a toy for most professional use cases. Also, it only runs on macOS Tahoe, which dramatically limits adoption right now.

The Futurist

Big Picture

Ship

Every Apple Silicon Mac now ships with a neural engine and a capable on-device LLM — Apfel is just the first tool to make that accessible via standard interfaces. This is a preview of the world where local models handle routine tasks completely off the network, with cloud models reserved for genuinely hard inference.

The Creator

Content & Design

Ship

Quick summaries, translation, text classification without pasting anything into a cloud service — the privacy angle alone is worth it for sensitive client work. MCP support means I can hook it into my local creative workflows. The zero-config setup removed every excuse I had not to try it.

Community Sentiment

Overall: 830 mentions
73% positive, 18% neutral, 9% negative

Hacker News: 340 mentions

Zero-config local LLM on Apple Silicon

Reddit: 200 mentions

OpenAI-compatible server mode

Twitter/X: 290 mentions

Privacy and offline-first AI