ODEI — Your Personal AI Operating System
The gap isn't in what AI can do. It's in what AI is allowed to do.
AI models advance monthly. The way you use them hasn't changed in two years.
You ask. It answers. You forget. It forgets.
The most powerful technology in history, reduced to a glorified search bar.
It's not a technology problem — today's AI models are extraordinary. The limitation is architectural: they're designed as products you visit, not partners who know you. They optimize for engagement metrics, not your life outcomes. You adapt to their interface — they never adapt to your reality.
ODEI changes what AI is allowed to do.
Not a chatbot. Not an app.
A persistent, private, proactive intelligence layer
that works for you — even when you're not looking.
Everyone has the same AI. If your value is prompting — you're competing with everyone.
Why you have no moat
Your "AI advantage" is your labor.
Every hour in this loop is an hour anyone else could put in. No compounding. No leverage. Just labor.
What this loop actually costs you
Time tax
You spend hours per week re-explaining, fixing, and following up. None of it compounds.
Attention tax
You stay in the loop as executor, verifier, and reminder. Context switching becomes the job.
Failure tax
Follow-ups slip. Deadlines drift. You only notice when it's already late.
What you don't own
Context
Your history lives in their UI. Switch vendors — lose everything.
Rules
No enforceable policies. Every session is a new negotiation.
Loop
Drafts aren't results. You copy, paste, send, check, fix.
If you are the glue — you're the part that gets replaced.
This hurts most if you are:
- Managing multiple projects or people
- Relying on follow-ups and deadlines
- Forgetting things because your system doesn't remember
- Feeling busy but watching outcomes still slip
If none of this applies — assistants are enough.
Own the infrastructure, not the chatbot
Interfaces talk to you. Infrastructure carries responsibility when you forget.
From replaceable to irreplaceable
You are the system
- Memory lives in your head
- Rules reset every session
- Execution is manual labor
- Anyone with prompts = same output
You own the system
- Memory compounds over time
- Rules persist and enforce
- AI does the labor
- Your architecture is unique
Models are interchangeable — your system isn't.
Others can prompt — they can't replicate your compound memory.
That's the difference between renting and owning.
The governance layer for personal AI.
Policy-controlled actions. Persistent context. Auditable outcomes.
Observe. Decide. Act. Verify. Evolve.
Outcome governance for personal workflows. Policy-bound. Measurable. Reviewable.
What ODEI addresses.
Every action is reviewable.
Receipts are designed to be tamper-evident and exportable. Intent, context, policy, action, result — all reviewable.
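Tamper evidence for a receipt ledger is typically achieved with a hash chain: each receipt's hash covers the previous one, so altering any entry breaks everything after it. The sketch below is illustrative only; the field names and the `add_receipt`/`verify` helpers are assumptions, not ODEI's actual receipt format.

```python
import hashlib
import json

def add_receipt(chain, intent, policy, action, result):
    """Append a receipt whose hash covers the previous entry, so editing
    any earlier receipt invalidates every receipt after it."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    body = {"intent": intent, "policy": policy, "action": action,
            "result": result, "prev": prev}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append({**body, "hash": digest})

def verify(chain):
    """Recompute every hash; a single altered field breaks the chain."""
    prev = "0" * 64
    for r in chain:
        body = {k: r[k] for k in ("intent", "policy", "action", "result", "prev")}
        recomputed = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if r["prev"] != prev or recomputed != r["hash"]:
            return False
        prev = r["hash"]
    return True

ledger = []
add_receipt(ledger, "remind Alex about the design review", "p_013", "send_email", "delivered")
add_receipt(ledger, "move standup to 10:00", "p_007", "update_event", "confirmed")
assert verify(ledger)

ledger[0]["result"] = "failed"   # tamper with history
assert not verify(ledger)
```

Exporting the chain is then just serializing the list; anyone holding it can re-run the verification without trusting the exporter.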
Automation ships only when performance improves.
Four primitives that connect the system.
Ready to own your AI infrastructure?
Built for founders, operators, and creators with too much context.
Request Early Access →
They're all building agents. We're building the next level.
An operational taxonomy of AI autonomy—from informational systems to symbiotic AI. Five levels. Only one creates true partnership. ODEI operates at Level 5.
| Level | Step Trigger | AI Initiative | Goal g / Utility | Task-state sₜ | Execute | Auto Loop | Action Space | Control | Persistent Memory | User Model | Policy Update | Audit | Scientific Class |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| L1 Informational AI | User | ✗ | ✗ | ✗ | ✗ | ✗ | Only y ∈ Y_text (response) | Human-in-control | ✗ | ✗ | ✗ | ✗ | Non-agent information systems, decision-support (read-only) |
| L2 Reactive AI | User | ✗ | ✗ | ✗ (h ≠ sₜ) | ✗ | ✗ | y + dialog, no execute | Human-in-control | ✗ | ✗ | ✗ | ✗ | Reactive agents (AIMA), interactive assistants |
| L3 Mixed-Initiative AI | User + AI alerts | ✓ | ✗ | ✗ | ✗ | ✗ | Suggest / ask / warn, no execute | Human-in-control (human is driver) | ◊ | ◊ | ✗ | ✗ | Mixed-initiative interaction (HCI + AI) |
| L4 Delegated / Agentic AI | AI within task (no new request) | ✓ | ✓ | ✓ | ✓ | ✓ | Bounded A_env (tools/APIs) | Human-on-the-loop | ◊ | ◊ | ✗ | ◊ | Goal-based / utility-based agents (AIMA), autonomous agents under supervision |
| L5 Symbiotic AI | Long co-adaptive loop | ✓ | ✓ (shared) | ✓ | ✓ | ✓ | A_env + policies + reconfiguration | Human-on-the-loop + co-adaptation | ✓ | ✓ | ✓ (by outcomes) | ✓ | Co-adaptive systems, human–AI teaming |
L4 Is Not L5
Agents that execute tasks are not agents that share your life.
What Defines Each Level
Three planes work in concert: Memory holds the world model, Control makes decisions, Execution changes reality. These 12 properties separate agents from tools—and L5 from everything else.
Closed-loop Autonomy
Act → Verify → Decide-next cycle
Bounded Initiative
Proposals with thresholds, not spam
Goal + Constraints
Success criteria with built-in limits
State Model sₜ
Working state + pause/resume + IPOV
External Execute
Real actions in external environment
Capability Registry
Bounded + enforced action space
Human Governance
Authority matrix + veto + escalation
Cybernetic Stability
Budgets, cooldowns, rate limits
Sovereign Memory
Persistent + provenance + read-before-decide
Neo4j Memory Atlas + History SQLite
User Model
Editable + consent + affects decisions
Human node, Context, preferences
Policy Evolution
Versioned + approval + rollback + drift
Patches → diff → human gate
Portability + Audit + Synergy
Export/import + receipts + S>0 gate
Right to exit, tamper-evident ledger
Where Big Tech Stands
They have the resources. They don't have the architecture.
Reactive with optional memory. Tool use when prompted. Human always initiates.
Reactive with memory. Strong reasoning, tools on request. No cross-session learning.
Reactive with Google integration. Can access user data. No autonomous initiative.
Task-bound browser agent. Executes multi-step workflows. Dies when task completes.
Desktop automation via MCP. Autonomous within task scope. No persistent state.
Multimodal agent with extensions. Tool orchestration. Task-scoped autonomy.
What L5 Requires
(user-owned, portable)
(goals, patterns, decisions)
(from outcomes)
(WHY / MEMORY / POLICY)
infrastructure
What if AI could truly know you? Not remember. Know.
L5 is not a marketing level. It's a scientifically defined agent class — the first where human and AI form a single cognitive system. Four decades of research. One breakthrough.
Extending AIMA Agent Classification
Russell & Norvig's canonical taxonomy (AIMA, 1995–2020) defines five agent classes: simple reflex (reactive), model-based reflex, goal-based, utility-based, and learning agents. Symbiotic AI extends this taxonomy by adding invariants that no existing class captures.
- Single-agent optimization
- Human as operator or supervisor
- Static goal specification
- Unidirectional adaptation
- Joint human-AI system optimization
- Human as co-participant
- Dynamic goal co-evolution
- Bidirectional co-adaptation
L4 agents may have memory and autonomy, but lack mandatory User Model, bidirectional adaptation, and divided subjectivity. L5 is a new class — different invariants, different engineering requirements.
What Changes at L5
- Memory is theirs, resets on their terms
- AI optimizes for AI's objective
- Human supervises, AI executes
- Adaptation is one-way: AI learns you
- Goals are given, then pursued
- Memory is yours, portable, sovereign
- System optimizes for joint outcome
- Human and AI co-participate
- Both adapt: you learn AI, AI learns you
- Goals co-evolve through partnership
L4 agents have autonomy. L5 agents have partnership.
The Four Axioms of Symbiosis
L5 is not defined by features. It's defined by invariants — properties that must hold for the system to qualify as symbiotic.
Mandatory User Model
The agent maintains a persistent model of your goals, preferences, cognitive state, and behavioral patterns. Not optional. Not a feature. A requirement.
Bidirectional Policy Update
Agent policy changes based on outcomes + your feedback. Not just reward signals — explicit preference learning. You shape how the AI thinks.
Co-Adaptation Loop
Both parties evolve. AI learns your preferences. You learn to formulate, calibrate trust, and evolve processes. The system grows together.
Divided Subjectivity
Agent has no goals outside your framework, but possesses execution autonomy. Source of goals ≠ source of action. You set direction. AI executes.
When Does Partnership Exist?
True synergy exists only when together we outperform the best of us alone. S > 0 is the threshold. Below it, you have assistance. Above it, you have partnership.
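Under that definition, S can be operationalized as the joint outcome minus the best solo outcome. A minimal sketch, assuming outcome scores normalized to [0, 1]; the scoring function itself is domain-specific and not specified here.

```python
def synergy(joint: float, human_alone: float, ai_alone: float) -> float:
    """S > 0 only when the pair beats the best solo performer.
    (An illustrative operationalization; real scoring is domain-specific.)"""
    return joint - max(human_alone, ai_alone)

assert synergy(0.9, 0.7, 0.8) > 0    # partnership: the pair beats the best alone
assert synergy(0.75, 0.7, 0.8) <= 0  # assistance: the AI alone did better
```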
How Well Does AI See Your World?
Sync (σ) measures how observable your current world is to ODEI right now. Without visibility, there is no governance. σ = 1.0 means full, fresh, consistent context. σ = 0.0 means blindness — any recommendation is noise.
Geometric mean ensures one blind spot cuts the entire score. This is intentional — partial observability produces unreliable governance.
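That blind-spot property is exactly what a geometric mean gives you, as a few lines show. The component names below (`calendar`, `email`, `health`) are illustrative assumptions, not ODEI's actual observability domains.

```python
import math

def sync_score(components: dict[str, float]) -> float:
    """Geometric mean of per-domain observability scores in [0, 1].
    A single blind domain (score 0) drives the whole score to 0."""
    values = list(components.values())
    return math.prod(values) ** (1 / len(values))

# Domain names are stand-ins for whatever ODEI actually observes.
assert sync_score({"calendar": 1.0, "email": 1.0, "health": 1.0}) == 1.0
assert 0.79 < sync_score({"calendar": 0.9, "email": 0.8, "health": 0.7}) < 0.80
assert sync_score({"calendar": 1.0, "email": 1.0, "health": 0.0}) == 0.0
```

An arithmetic mean would score the last case at 0.67 and hide the blindness; the geometric mean reports it as total.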
"AI sees everything" — measured in real time
"We outperform the best agent" — measured weekly
Six Research Traditions
Human-AI Teaming
Treating AI as teammate, not tool. Shared mental models and mutual comprehension.
National Academies (2022); O'Neill et al. (2020); Lyons et al. (2021)
Co-Adaptive Systems
Both agents learn and adapt simultaneously. Mutual adjustment toward shared goals.
van Zoelen et al. (2021); Nikolaidis et al. (2017); Döppner et al. (2019)
Adjustable Autonomy
Dynamic allocation of control based on context, workload, and human state.
Beer et al. (2014); Sheridan (1992); Parasuraman et al. (2000)
Shared Autonomy
Blending human intent with AI capability for superior joint outcomes.
Dragan & Srinivasa (2013); Javdani et al. (2018)
Theory of Mind Alignment
AI maintains accurate model of user's mental state. Active sensing minimizes the "perspective gap" between human intent and AI action.
Theory of Mind in human-robot interaction literature.
Joint Process Modeling
Human and AI as single coupled dynamical system. Latency, friction, and context-switching treated as noise to be systematically eliminated.
Dynamical systems theory applied to human-AI coordination.
Symbiotic AI is a co-adaptive human–AI system in which an autonomous agent operates with persistent memory, policy update, and user model, while the human remains the source of goals, norms, and responsibility in a human-on-the-loop configuration.
This is not philosophy. This is a system class.
They call it chat history. We call it memory.
Chat history is theirs—resets on their terms, doesn't affect behavior. Memory in ODEI is yours—persistent, sovereign, and shapes every decision.
Three Architectural Properties
Properties 9–11 from Agency Levels. What separates L5 from L4.
Persistent Memory
Cross-session memory that affects decisions. Not just "remember what we talked about"—but facts, beliefs, patterns, and outcomes that shape reasoning.
User Model
Persistent representation of you—goals, preferences, constraints, decision patterns. Not a profile you fill out. A model that emerges from observation.
Policy Update
Outcomes change strategy, not just state. When you approve, deny, or override—ODEI updates the operating rules. Your corrections become versioned policy.
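Versioned policy with rollback can be sketched in a few lines. This is a minimal illustration using a hypothetical in-memory `Policy` class, not ODEI's actual storage or approval flow.

```python
from dataclasses import dataclass, field

@dataclass
class Policy:
    """A rule whose every change is kept, so any version can be restored."""
    rule: str
    version: int = 1
    history: list = field(default_factory=list)

    def update(self, new_rule: str, reason: str):
        # A correction becomes a new version; the old rule is preserved.
        self.history.append((self.version, self.rule, reason))
        self.rule = new_rule
        self.version += 1

    def rollback(self):
        # Restore the previous version if the change made things worse.
        self.version, self.rule, _ = self.history.pop()

p = Policy(rule="notify_if_blocker > 48h")
p.update("notify_if_blocker > 24h", reason="user override: 48h was too slow")
assert p.version == 2 and p.rule.endswith("24h")

p.rollback()
assert p.version == 1 and p.rule.endswith("48h")
```

The key property is that an override is never a silent overwrite: the old rule, its version, and the reason for the change all survive as auditable history.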
Three Ownership Principles
Memory without ownership is surveillance.
Portable
Your memory is not locked to a model or vendor. Export anytime. Switch providers. Keep your accumulated context. The value you build stays with you.
Versioned
Every change is tracked. Policies have versions. Facts have timestamps. If something goes wrong, roll back. Trace the history of any decision.
Provenance
Every fact has a source. Every belief has evidence. Every policy has an origin. No "I just know"—full traceability from memory to action.
Your Data, Your Control
Sovereignty means you decide what stays, what goes, and who sees it.
What Is Stored
- Facts you confirm or correct
- Goals and deadlines you set
- Decisions and their outcomes
- Policies you approve
- Patterns detected (with consent)
What Is Not Stored
- Raw conversation transcripts
- Data you mark as ephemeral
- Third-party content without consent
- Biometric data (unless explicit)
- Anything you delete
One click. Immediate effect. No "30-day retention period".
{
"facts": [
{"id": "f_001", "content": "Alex owns design", "source": "email:2026-01-02", "confidence": 0.95}
],
"policies": [
{"id": "p_013", "rule": "notify_if_blocker > 48h", "version": 14, "status": "active"}
],
"goals": [
{"id": "g_003", "name": "Ship MVP", "deadline": "2026-01-15", "progress": 0.6}
]
}
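Because the export is plain JSON, its core invariants (provenance on every fact, an explicit version on every policy) can be checked mechanically. A sketch over the format shown above:

```python
import json

export = """{
  "facts": [
    {"id": "f_001", "content": "Alex owns design",
     "source": "email:2026-01-02", "confidence": 0.95}
  ],
  "policies": [
    {"id": "p_013", "rule": "notify_if_blocker > 48h",
     "version": 14, "status": "active"}
  ]
}"""

data = json.loads(export)

# Provenance: every fact must carry a source (no "I just know").
assert all("source" in fact for fact in data["facts"])

# Versioning: every policy carries an explicit integer version.
assert all(isinstance(p["version"], int) for p in data["policies"])
```

The same checks work against any vendor's tooling, which is the point of a portable format.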
Memory is what makes L5 possible. Without persistent memory, no user model. Without user model, no adaptation. Without adaptation, no partnership. Just another stateless assistant pretending to know you.
Building the future
takes time.
Each phase unlocks new capabilities. We ship when it's ready, not when it's rushed.
When all five run continuously, you have a personal OS.
Observe
AI that sees your world — memory, health, context, signals.
- Persistent memory across sessions
- Health data integration
- Source tracking for every fact
Decide
You are here
AI that knows what matters — rules, priorities, authority.
- Rule-based decision engine
- Priority system
- Authority checks before action
Act
AI that does, not just suggests — execution with accountability.
- Approval workflows
- Action receipts
- Supervised execution
Verify
AI that confirms results — outcome tracking, feedback loops.
- Outcome tracking
- Goal-action linking
- Success/failure detection
Evolve
AI that gets better — policy evolution from outcomes.
- Policy evolution from outcomes
- Pattern detection
- Adaptive behavior
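Taken together, the five phases compose into one continuous cycle. The stub below is purely illustrative: every helper is a stand-in, not ODEI's actual API.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    action: str
    approved: bool

def observe(world):
    # Stand-in: gather memory, context, and signals.
    return world

def decide(obs, policies):
    # Authority check: approve only actions the policy set allows.
    action = obs["pending"]
    return Decision(action, approved=action in policies["allowed"])

def act(decision):
    # Stand-in execution; a real system would attach a receipt here.
    return {"action": decision.action, "status": "done"}

def verify_outcome(result):
    return result["status"] == "done"

def evolve(policies, success):
    # Successful outcomes raise the automation budget; failures lower it.
    policies["budget"] += 1 if success else -1
    return policies

policies = {"allowed": {"send_reminder"}, "budget": 0}
decision = decide(observe({"pending": "send_reminder"}), policies)
if decision.approved:
    policies = evolve(policies, verify_outcome(act(decision)))
assert policies["budget"] == 1
```

Run continuously, Evolve feeds the next Observe: that closed loop is what distinguishes an OS-like system from a one-shot assistant.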
Want to follow the journey?
Stay connected →
Privacy is not a feature — it is an architectural requirement for S > 0. Without it, symbiosis is impossible.
Our Commitment
Your data lives on your devices by default. Cloud sync is opt-in and encrypted.
We do not sell, rent, or trade your personal data. Ever.
Every piece of data has a source and a timestamp, and you can trace how it's used.
Disconnect integrations and delete data at any time with immediate effect.
All system actions are logged with receipts — you can verify what ODEI did and why.
What We Never Do
- Sell your data to third parties
- Train AI models on your data
- Share data without explicit consent
- Store more than necessary
- Hide how your data is used
- Make privacy opt-in
Data We Process
- Account data (optional). Email and authentication identifiers only if you create an account.
- Your content. Notes, tasks, goals, calendar events. Stored locally unless you enable sync.
- Connected services. With explicit permission: calendar, email, health data. You control connections.
- System metadata. Minimal diagnostics for reliability. Analytics are anonymized and opt-in only.
- Working model. ODEI maintains context from data you provide — all with clear provenance.
Your Rights
- Export all data anytime (JSON, CSV formats)
- Remove any or all data
- Fix inaccuracies
- Disable any feature
Effective date: December 18, 2025
Ready to own
your AI?
ODEI is in active development. We're building the infrastructure for true human-AI partnership.
We'll respond within 48h
The future of AI is not assistance — it's partnership.