Mar 2026 · Computing · Interface Design

SphereOS

The Spatial Operating System for Your Digital Life

Full Concept

A reimagined computing interface that replaces app grids, tabs, and folder hierarchies with a zoomable 3D space of interactive orbs. You don't open apps. You navigate a living map of your digital world, guided by an AI consensus engine that routes queries to multiple models simultaneously.

The Interface Shift

Every major computing interface shift has followed the same arc, from impossible to inevitable to default. The GUI replaced the command line. Touch replaced the mouse. Conversational AI replaced form-based input. SphereOS proposes the next layer: spatial + AI navigation as the default interface for digital life.

Instead of app grids, browser tabs, and folder hierarchies, you navigate a zoomable 3D space of interactive orbs. Each orb represents a person, app, business, task, knowledge cluster, or digital service. You don't open apps. You navigate a living map that reorganizes around what you're working on, who you're talking to, and what you need next.

The Zoom Model

Navigation works through semantic zoom rather than page navigation. Zoomed out: your entire digital world as a field of orbs organized by category. Zoom in: the category expands into sub-orbs. Social becomes Instagram, Messages, LinkedIn. Zoom further: individual profiles, conversations, content. The depth of your digital world becomes spatial rather than hierarchical. You move through it rather than clicking through it.
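The zoom model above can be sketched as a small data structure. This is a hypothetical illustration, not SphereOS code: the `Orb` type and the `visibleAt` function are names I'm assuming for the sake of the example.

```typescript
// An orb is a labeled node whose children are revealed by zooming in.
interface Orb {
  label: string;    // e.g. "Social", "Instagram", a single conversation
  children: Orb[];  // sub-orbs one zoom level deeper
}

// Return the orb labels visible at a given zoom depth: depth 0 shows the
// root, each additional level expands one layer of children. Leaf orbs
// stay visible once reached, whatever the depth.
function visibleAt(root: Orb, depth: number): string[] {
  if (depth === 0 || root.children.length === 0) return [root.label];
  return root.children.flatMap(c => visibleAt(c, depth - 1));
}

// The example from the text: Social expands into Instagram, Messages, LinkedIn.
const world: Orb = {
  label: "Everything",
  children: [
    {
      label: "Social",
      children: [
        { label: "Instagram", children: [] },
        { label: "Messages", children: [] },
        { label: "LinkedIn", children: [] },
      ],
    },
    { label: "Work", children: [] },
  ],
};
```

Calling `visibleAt(world, 1)` yields the top-level categories; `visibleAt(world, 2)` expands Social into its sub-orbs while Work, a leaf, remains as-is. The point is that depth is a continuous navigation parameter rather than a folder path.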

SphereAI: The Consensus Engine

At the center of the system is SphereAI: not one AI, but a routing layer that queries multiple models simultaneously (OpenAI, Claude, Gemini, specialized models), compares their responses, scores confidence, resolves disagreements, and returns a single synthesized answer. You don't choose which AI to use. SphereAI selects the best answer for your specific query from whatever models are available at that moment.

This makes SphereOS model-agnostic infrastructure, the layer above AI rather than a bet on any single model.
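A toy sketch of that routing layer, with model calls stubbed as plain functions (a real SphereAI would call the OpenAI, Claude, and Gemini APIs concurrently). The `ModelAnswer` shape, the `consensus` function, and the majority-vote-with-confidence-tiebreak scoring are all my assumptions; the concept doesn't specify a resolution strategy.

```typescript
// One model's response to a query, with a confidence score in [0, 1].
interface ModelAnswer {
  model: string;
  answer: string;
  confidence: number;
}

// Fan the query out to all models in parallel, tally agreeing answers,
// and return the winner: most votes first, highest confidence as tiebreak.
async function consensus(
  query: string,
  models: Array<(q: string) => Promise<ModelAnswer>>
): Promise<ModelAnswer> {
  const answers = await Promise.all(models.map(m => m(query)));

  // Group identical answers; keep the most confident backer of each.
  const tally = new Map<string, { votes: number; best: ModelAnswer }>();
  for (const a of answers) {
    const entry = tally.get(a.answer);
    if (!entry) {
      tally.set(a.answer, { votes: 1, best: a });
    } else {
      entry.votes += 1;
      if (a.confidence > entry.best.confidence) entry.best = a;
    }
  }

  return [...tally.values()].sort(
    (x, y) => y.votes - x.votes || y.best.confidence - x.best.confidence
  )[0].best;
}
```

With stubs, two models answering "yes" outvote one model answering "no" even if the dissenter reports higher confidence, which is the model-agnostic point: the layer above the models decides, not any single model.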

The Hardware Roadmap

Phase 1, Mobile and desktop: SphereOS as a spatial browser layer running on existing devices. The interface model validated before any hardware commitment.

Phase 2, Wearables: Devices like Meta Ray-Ban glasses already make a glanceable, always-on spatial layer practical. SphereOS becomes the navigation model for that layer. What you see when you look at a business, a person, or a location is not a search result but an orb you can enter.

Phase 3, Neural interfaces: As technology like Neuralink matures, SphereOS becomes the natural interface layer between human intention and digital action. You don't speak or gesture. You navigate by thought. The orbs respond to where attention goes.

This is not science fiction sequencing. It's the same arc that every interface shift has followed, described honestly from where we are now.

The MVP Wedge

Buildable today with Three.js, React Three Fiber, Supabase, and the Anthropic API: a Local Business Sphere (search → businesses appear as orbs → tap to call, book, message, review) or a Personal Day Sphere (calendar, tasks, reminders, AI daily briefing in spatial form). Either is a contained, demonstrable version of the concept.
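One concrete piece of a Local Business Sphere is laying out N result orbs evenly around the viewer. A standard way to do that is a Fibonacci lattice on a sphere; the positions it returns could feed directly into Three.js mesh coordinates. This is a sketch of one layout choice, not a prescribed SphereOS algorithm, and `fibonacciSphere` is a name I'm assuming.

```typescript
// Place n points near-uniformly on a sphere of the given radius using the
// golden-angle (Fibonacci) lattice. Returns [x, y, z] triples.
function fibonacciSphere(n: number, radius = 1): Array<[number, number, number]> {
  const pts: Array<[number, number, number]> = [];
  const golden = Math.PI * (3 - Math.sqrt(5)); // golden angle ≈ 2.39996 rad
  for (let i = 0; i < n; i++) {
    const y = 1 - (2 * i + 1) / n;     // evenly spaced heights in (-1, 1)
    const r = Math.sqrt(1 - y * y);    // ring radius at that height
    const theta = golden * i;          // spiral around the axis
    pts.push([
      radius * r * Math.cos(theta),
      radius * y,
      radius * r * Math.sin(theta),
    ]);
  }
  return pts;
}
```

In a React Three Fiber scene, each triple would become the `position` of one business orb, so ten search results and a hundred both fill the sphere evenly without any per-count layout code.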

*Published as an open concept. Free to build on.*


If this resonated, follow the build. I write when something ships, breaks, or changes my thinking.
