Obsidian stack

Apple for experience. x86 for scale.

You are not selling boxes. You are selling a coordinated node stack. The Apple path gives you the cleanest plug-and-play local AI experience. The x86 path gives you budget flexibility at the low end and GPU-backed expansion at the high end. You need both.

Tier 1 · Apple desk node

Obsidian Personal

Apple Mac mini M2 · 16 GB unified · 1 TB

4.7 / 5

Best Apple entry

Affordable entry node that runs small local LLMs smoothly with almost no setup friction.

Unified memory behaves like VRAM, so Ollama and LM Studio feel unusually smooth for the size and price.

Platform
Apple Silicon
AI accel
Unified memory
Model fit
7B-8B local models
Upgrades
None
Best use

Silent desktop AI for personal assistants, Q&A and offline drafting.

Hardware from €579.00
Tier 1 · x86 desk node

Obsidian Flex

GMKtec Ryzen 7 8845HS · 32 GB

4.6 / 5

Flexible non-Apple budget node

A lower-cost Windows/Linux node with more RAM flexibility and a strong integrated GPU for local AI.

This is the strongest value-oriented alternative if you want higher RAM headroom without moving into Apple pricing.

Platform
x86 Ryzen
AI accel
Radeon 780M iGPU
Model fit
7B-13B local models
Upgrades
Moderate
Best use

Budget-conscious local AI, Linux-first setups and users who want more tweakability.

Hardware from €359.99
Tier 2 · Apple core node

Obsidian Core

Apple Mac mini M4 · 32 GB unified

4.8 / 5

Minimum serious sellable configuration

Balanced Apple-first node for continuous local AI workloads, retrieval and multi-agent prototypes.

The base M4 starts at around €929, but the 32 GB build is the first configuration worth selling for sustained local AI.

Platform
Apple Silicon
AI accel
Neural Engine
Model fit
7B-13B local models
Upgrades
None
Best use

The best balance of simplicity, silence and useful local model headroom.

Hardware from €1,143.97
Tier 3 · Apple pro node

Obsidian Pro

Apple Mac mini M4 Pro · 64 GB unified

4.9 / 5

Best Apple high-end

High-memory compact node for larger local models, multi-agent orchestration and heavier professional workflows.

Apple Silicon is fixed at purchase time. RAM and GPU are not field-upgradeable, so this is the tier to buy once and size correctly.

Platform
Apple Silicon
AI accel
Neural Engine + stronger GPU
Model fit
13B-30B local models
Upgrades
None
Best use

Team-level local AI, orchestration-heavy work and advanced secure inference on the desk.

Hardware from €1,903.99
Tier 4 · expandable edge node

Obsidian Edge

Minisforum AI X1 Pro · Ryzen AI 9

Scale-first non-Apple path

Expandable node for heavier inference, shared usage and future GPU-backed scaling beyond the Mac mini ceiling.

This is the node to choose when you care more about expansion, eGPU support and scale than absolute simplicity.

Platform
x86 Ryzen AI
AI accel
NPU + eGPU path
Model fit
13B-70B+ with expansion
Upgrades
High
Best use

Server-grade local AI, multi-user deployments and the path toward 70B-scale setups.

Hardware from €799.00

Reality check

Mac minis are excellent AI appliances precisely because they are silent, stable and frictionless. The tradeoff is finality. RAM and GPU are fixed at purchase time. That means your Apple lineup has to be sold in the right memory configuration up front.

That is why the M4 32 GB build is the minimum serious Core unit, and why the x86 Edge node exists at the top of the stack.
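The memory tiers above follow a simple sizing rule of thumb that can be sketched in a few lines. The constants here (4-bit quantization at roughly 0.5 bytes per parameter, plus about 20% overhead for the KV cache and runtime) are illustrative assumptions, not vendor figures:

```python
# Rough rule-of-thumb sizing for local LLM memory needs.
# Assumptions (not from the product copy): ~0.5 bytes per parameter
# at 4-bit quantization, plus ~20% overhead for KV cache and runtime.

def estimated_gb(params_billions, bytes_per_param=0.5, overhead=1.2):
    """Estimate the RAM/VRAM needed to run a quantized model, in GB."""
    return params_billions * bytes_per_param * overhead

for size in (8, 13, 30, 70):
    print(f"{size}B model: ~{estimated_gb(size):.1f} GB")
```

On this scale an 8B model fits easily in the 16 GB Personal node, 30B-class models want the 64 GB Pro tier once the OS and longer contexts are counted, and 70B models push past every Mac mini, which is exactly where the Edge node's expansion path comes in.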

Packaging rule

Apple is the best path when the priority is silence, simplicity and a premium desktop experience. x86 is the right path when the priority is expansion, shared usage and future GPU-backed scale.

  • Apple = experience + simplicity
  • x86 = power + scalability
  • The Obsidian line should present both as first-class nodes
Comparison

Capability overview

Capability | Personal | Flex | Core | Pro | Edge
Configuration | Apple Mac mini M2 · 16 GB unified · 1 TB | GMKtec Ryzen 7 8845HS · 32 GB | Apple Mac mini M4 · 32 GB unified | Apple Mac mini M4 Pro · 64 GB unified | Minisforum AI X1 Pro · Ryzen AI 9
Hardware from | €579.00 | €359.99 | €1,143.97 | €1,903.99 | €799.00
Architecture | Apple Silicon | x86 Ryzen | Apple Silicon | Apple Silicon | x86 Ryzen AI
Memory | 16 GB | 32 GB | 32 GB | 64 GB | 96 GB
GPU path | Integrated only | Integrated only | Integrated only | GPU-ready | GPU-ready
AI acceleration | Unified memory | Radeon 780M iGPU | Neural Engine | Neural Engine + stronger GPU | NPU + eGPU path
Model capability | 7B-8B local models | 7B-13B local models | 7B-13B local models | 13B-30B local models | 13B-70B+ with expansion
Upgradeability | None | Moderate | None | None | High
Noise / efficiency | 5/5 quiet | 4/5 quiet | 5/5 quiet | 5/5 quiet | 3/5 workstation

Best use
  • Personal: Silent desktop AI for personal assistants, Q&A and offline drafting.
  • Flex: Budget-conscious local AI, Linux-first setups and users who want more tweakability.
  • Core: The best balance of simplicity, silence and useful local model headroom.
  • Pro: Team-level local AI, orchestration-heavy work and advanced secure inference on the desk.
  • Edge: Server-grade local AI, multi-user deployments and the path toward 70B-scale setups.
Local AI devices — SELBSAI