Apple for experience. x86 for scale.
You are not selling boxes. You are selling a coordinated node stack. The Apple path gives you the cleanest plug-and-play local AI experience. The x86 path gives you budget flexibility at the low end and GPU-backed expansion at the high end. You need both.
Obsidian Personal
Apple Mac mini M2 · 16 GB unified · 1 TB
Best Apple entry
Affordable entry node that runs small local LLMs smoothly with almost no setup friction.
Unified memory behaves like VRAM, so Ollama and LM Studio feel unusually smooth for the size and price.
- Platform: Apple Silicon
- AI accel: Unified memory
- Model fit: 7B-8B local models
- Upgrades: None
Silent desktop AI for personal assistants, Q&A and offline drafting.
Obsidian Flex
GMKtec Ryzen 7 8845HS · 32 GB
Flexible non-Apple budget node
A lower-cost Windows/Linux node with more RAM flexibility and a strong integrated GPU for local AI.
This is the strongest value-oriented alternative if you want higher RAM headroom without moving into Apple pricing.
- Platform: x86 Ryzen
- AI accel: Radeon 780M iGPU
- Model fit: 7B-13B local models
- Upgrades: Moderate
Budget-conscious local AI, Linux-first setups and users who want more tweakability.
Obsidian Core
Apple Mac mini M4 · 32 GB unified
Minimum serious sellable configuration
Balanced Apple-first node for continuous local AI workloads, retrieval and multi-agent prototypes.
The base M4 starts at around €929, but the 32 GB build is the first configuration worth selling for sustained local AI.
- Platform: Apple Silicon
- AI accel: Neural Engine
- Model fit: 7B-13B local models
- Upgrades: None
The best balance of simplicity, silence and useful local model headroom.
Obsidian Pro
Apple Mac mini M4 Pro · 64 GB unified
Best Apple high-end
High-memory compact node for larger local models, multi-agent orchestration and heavier professional workflows.
Apple Silicon is fixed at purchase time. RAM and GPU are not field-upgradeable, so this is the tier to buy once and size correctly.
- Platform: Apple Silicon
- AI accel: Neural Engine + stronger GPU
- Model fit: 13B-30B local models
- Upgrades: None
Team-level local AI, orchestration-heavy work and advanced secure inference on the desk.
Obsidian Edge
Minisforum AI X1 Pro · Ryzen AI 9
Scale-first non-Apple path
Expandable node for heavier inference, shared usage and future GPU-backed scaling beyond the Mac mini ceiling.
This is the node to choose when you care more about expansion, eGPU support and scale than absolute simplicity.
- Platform: x86 Ryzen AI
- AI accel: NPU + eGPU path
- Model fit: 13B-70B+ with expansion
- Upgrades: High
Server-grade local AI, multi-user deployments and the path toward 70B-scale setups.
Reality check
Mac minis are excellent AI appliances precisely because they are silent, stable and frictionless. The tradeoff is finality. RAM and GPU are fixed at purchase time. That means your Apple lineup has to be sold in the right memory configuration up front.
That is why the M4 32 GB build is the minimum serious Core unit, and why the x86 Edge node exists at the top of the stack.
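The memory tiers above follow from simple arithmetic. As a rough sizing sketch (illustrative assumptions, not vendor figures: ~4-bit quantized weights at about 0.5 bytes per parameter, plus a flat few-GB allowance for KV cache, runtime and OS; real footprints vary by runtime and context length):

```python
def min_memory_gb(params_billion: float,
                  bytes_per_param: float = 0.5,  # assumes ~4-bit quantization
                  overhead_gb: float = 4.0) -> float:
    """Rough minimum memory estimate for running a local LLM.

    Weights at ~0.5 bytes/parameter (4-bit quantization) plus a flat
    overhead for KV cache, runtime and OS. Illustrative only.
    """
    return params_billion * bytes_per_param + overhead_gb

# Fit check against the Obsidian tiers and their headline model sizes
tiers = [("Personal", 16, 8), ("Core", 32, 13),
         ("Pro", 64, 30), ("Edge", 96, 70)]
for name, ram_gb, model_b in tiers:
    need = min_memory_gb(model_b)
    verdict = "fits" if need <= ram_gb else "too tight"
    print(f"{name}: ~{need:.0f} GB needed for a {model_b}B model "
          f"in {ram_gb} GB -> {verdict}")
```

Under these assumptions every tier clears its headline model size with headroom, which is exactly why the memory configuration has to be chosen at purchase time on the Apple side.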
Packaging rule
Apple is the best path when the priority is silence, simplicity and a premium desktop experience. x86 is the right path when the priority is expansion, shared usage and future GPU-backed scale.
- Apple = experience + simplicity
- x86 = power + scalability
- The Obsidian line should present both as first-class nodes
Capability overview
| Capability | Personal (Mac mini M2, 16 GB unified, 1 TB, €579.00) | Flex (GMKtec Ryzen 7 8845HS, 32 GB, €359.99) | Core (Mac mini M4, 32 GB unified, €1,143.97) | Pro (Mac mini M4 Pro, 64 GB unified, €1,903.99) | Edge (Minisforum AI X1 Pro, Ryzen AI 9, €799.00) |
|---|---|---|---|---|---|
| Architecture | Apple Silicon | x86 Ryzen | Apple Silicon | Apple Silicon | x86 Ryzen AI |
| Memory | 16 GB | 32 GB | 32 GB | 64 GB | 96 GB |
| GPU path | Integrated only | Integrated only | Integrated only | Integrated only (stronger GPU) | eGPU-ready |
| AI acceleration | Unified memory | Radeon 780M iGPU | Neural Engine | Neural Engine + stronger GPU | NPU + eGPU path |
| Model capability | 7B-8B local models | 7B-13B local models | 7B-13B local models | 13B-30B local models | 13B-70B+ with expansion |
| Upgradeability | None | Moderate | None | None | High |
| Noise / efficiency | 5/5 quiet | 4/5 quiet | 5/5 quiet | 5/5 quiet | 3/5 workstation |
| Best use | Silent desktop AI for personal assistants, Q&A and offline drafting. | Budget-conscious local AI, Linux-first setups and users who want more tweakability. | The best balance of simplicity, silence and useful local model headroom. | Team-level local AI, orchestration-heavy work and advanced secure inference on the desk. | Server-grade local AI, multi-user deployments and the path toward 70B-scale setups. |