ODEI — Your Personal AI Operating System

> AI generates text. You do all the work.
> Prompt. Copy. Paste. Verify. Repeat.
> Do you agree?

Stop prompt-chasing. Own the control layer.
Introduction

The gap isn't in what AI can do. It's in what AI is allowed to do.

AI models advance monthly. The way you use them hasn't changed in two years.
You ask. It answers. You forget. It forgets.
The most powerful technology in history, reduced to a glorified search bar.

Today's AI → What's Possible
01 · Waits for your prompt → Initiates when it matters
Sees your calendar, notices patterns, acts before you ask.
02 · Black box decisions → Fully explainable reasoning
Every recommendation is traceable. No magic, just logic you can audit.
03 · Generates text → Executes real outcomes
Books the meeting. Sends the email. Moves the task. Done, not drafted.
04 · Static responses → Continuous co-evolution
Learns your preferences, updates its understanding, grows with you.
05 · Their platform, their rules → Your system, your data
Runs locally. You own the memory. No corporate surveillance.
06 · Forgets everything → Remembers what matters
Your goals, your context, your history — persistent across sessions.
Why This Gap Exists

It's not a technology problem — today's AI models are extraordinary. The limitation is architectural: they're designed as products you visit, not partners who know you. They optimize for engagement metrics, not your life outcomes. You adapt to their interface — they never adapt to your reality.

ODEI changes what AI is allowed to do.

Not a chatbot. Not an app.
A persistent, private, proactive intelligence layer
that works for you — even when you're not looking.

1
THE PROBLEM

Everyone has the same AI. If your value is prompting — you're competing with everyone.

THE MATH

Why you have no moat

Your "AI advantage"

  • Same models as everyone → 0 moat
  • Prompts anyone can learn → 0 moat
  • Context in your head, not a system → 0 moat

Your labor

  • Re-explaining every session → replaceable
  • Copy-paste-verify loop → replaceable
  • Manual follow-ups → replaceable

Net position: you're overhead, not an asset.
The loop: Re-explain → Generate → Copy → Paste → Verify → Repeat, with YOU in the middle.

Every hour in this loop is an hour of work anyone else could do. No compounding. No leverage. Just labor.

THE TAX

What this loop actually costs you

Time tax (+2h / week)

You spend hours per week re-explaining, fixing, and following up. None of it compounds.

Attention tax (constant interrupts)

You stay in the loop as executor, verifier, and reminder. Context switching becomes the job.

Failure tax (missed outcomes)

Follow-ups slip. Deadlines drift. You only notice when it's already late.

THE TRAP

What you don't own

Context

Your history lives in their UI. Switch vendors — lose everything.

Rules

No enforceable policies. Every session is a new negotiation.

Rule: if (amount > $500) → ask. Reality: ignored.

Loop

Drafts aren't results. You copy, paste, send, check, fix.

Draft v1 → Draft v2 → Draft v3...
A junior with the same prompts produces the same output. A template could replace your "process".

If you are the glue — you're the part that gets replaced.

REALITY CHECK

This hurts most if you are:

  • Managing multiple projects or people
  • Relying on follow-ups and deadlines
  • Forgetting things because your system doesn't remember
  • Feeling busy but watching outcomes still slip

If none of this applies — assistants are enough.

THE SHIFT

Own the infrastructure, not the chatbot

RENTING
  • Context: locked in vendor UI
  • Rules: reset every session
  • Loop: manual copy-paste
  • Audit: no trail
  • Lock-in: vendor-controlled

ODEI: OWNING
  • Memory: portable, versioned, yours
  • Rules: policies that persist
  • Execution: actions, not drafts
  • Audit: receipts for everything
  • Models: swap anytime
Assistants are an interface. ODEI is infrastructure.

Interfaces talk to you. Infrastructure carries responsibility when you forget.

THE ESCAPE

From replaceable to irreplaceable

REPLACEABLE

You are the system

  • Memory lives in your head
  • Rules reset every session
  • Execution is manual labor
  • Anyone with prompts = same output
IRREPLACEABLE

You own the system

  • Memory compounds over time
  • Rules persist and enforce
  • AI does the labor
  • Your architecture is unique

Models are interchangeable — your system isn't.

Others can prompt — they can't replicate your compound memory.

That's the difference between renting and owning.

2
PRODUCT

The governance layer for personal AI.

Policy-controlled actions. Persistent context. Auditable outcomes.

TYPICAL ASSISTANT
Prompt-driven · session-based context · suggests text, not outcomes.

ODEI SYSTEM
Event-driven · persistent state and memory · executes with verification.
Example run: Checked calendar → Proposed reschedule → Requested approval → Sent invite → Logged receipt.
THE GOVERNANCE LOOP

Observe. Decide. Act. Verify. Evolve.

Outcome governance for personal workflows. Policy-bound. Measurable. Reviewable.

Governing your outcomes, 24/7 and autonomous:

STEP 1 · Observe: calendar, health, memory, signals
STEP 2 · Decide: priority + authority check
STEP 3 · Act: execute or request approval
STEP 4 · Verify: confirm outcome achieved
STEP 5 · Evolve: update policy + memory
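As an illustrative sketch only (hypothetical names, not ODEI's actual implementation), one pass of this loop might look like:

```python
from dataclasses import dataclass, field

@dataclass
class Signal:
    source: str   # e.g. "calendar", "health", "memory"
    payload: str

@dataclass
class LoopResult:
    steps: list = field(default_factory=list)

def governance_loop(signal: Signal, authority: str = "supervised") -> LoopResult:
    """One Observe → Decide → Act → Verify → Evolve pass (illustrative only)."""
    result = LoopResult()
    result.steps.append(("observe", signal.source))               # STEP 1: gather signals
    needs_approval = authority in ("suggest", "confirm")          # STEP 2: authority check
    result.steps.append(("decide", needs_approval))
    if needs_approval:
        result.steps.append(("act", "requested_approval"))        # STEP 3: ask first
    else:
        result.steps.append(("act", f"handle:{signal.payload}"))  # STEP 3: execute
    result.steps.append(("verify", True))                         # STEP 4: confirm outcome
    result.steps.append(("evolve", "policy+memory updated"))      # STEP 5: feed back
    return result

run = governance_loop(Signal("calendar", "reschedule"))
```

Under a lower-authority tier the same loop pauses at step 3 and requests approval instead of executing.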
SIX PROBLEMS SOLVED

What ODEI addresses.

01 INITIATIVE
Problem: AI waits for prompts
Solved: Goal-triggered actions (within policy)
02 TRANSPARENCY
Problem: Black box decisions
Solved: Intent + Context + Policy → Action (auditable)
03 EXECUTION
Problem: Text suggestions only
Solved: Outcome execution with verification
04 ADAPTATION
Problem: Static configuration
Solved: Versioned policies with feedback loops
05 CONTROL
Problem: Platform lock-in
Solved: Local-first, exportable, authority tiers
06 CONTINUITY
Problem: Session amnesia
Solved: Constitutional Memory with drift detection
AUDITABILITY

Every action is reviewable.

Action triggered: Goal drift detected → Priority alert sent
RECEIPT #2026-01-04-0847
INTENT: Notify user of goal drift
CONTEXT: Goal "Ship ODEI" · 3 blockers · Last session 17d ago
POLICY: Alert if goal drift > 3 days (v14)
ACTION: Priority alert sent
APPROVAL: User confirmed re-prioritization
ROLLBACK: Available

Receipts are designed to be tamper-evident and exportable. Intent, context, policy, action, result — all reviewable.
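A receipt like the one above can be modeled as a plain, exportable record. This is a hedged sketch of the shape, with assumed field names, not ODEI's actual schema:

```python
from dataclasses import dataclass, asdict
import json

@dataclass(frozen=True)  # frozen: a receipt should not be mutated after the fact
class Receipt:
    receipt_id: str
    intent: str
    context: str
    policy: str
    policy_version: int
    action: str
    approval: str
    rollback_available: bool

r = Receipt(
    receipt_id="2026-01-04-0847",
    intent="Notify user of goal drift",
    context='Goal "Ship ODEI" · 3 blockers · Last session 17d ago',
    policy="Alert if goal drift > 3 days",
    policy_version=14,
    action="Priority alert sent",
    approval="User confirmed re-prioritization",
    rollback_available=True,
)
exported = json.dumps(asdict(r))  # exportable and reviewable as plain JSON
```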

AUTHORITY

Set permission tiers per workflow.

Observe: log activity only
Suggest: recommendations only
Confirm: ask before acting
Supervised: act, notify, rollback
Autonomous: act within policy

Different workflows require different risk profiles.
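A minimal sketch of the five tiers as an ordered enum (illustrative; names taken from the list above, logic assumed):

```python
from enum import IntEnum

class Authority(IntEnum):
    OBSERVE = 1     # log activity only
    SUGGEST = 2     # recommendations only
    CONFIRM = 3     # ask before acting
    SUPERVISED = 4  # act, notify, rollback
    AUTONOMOUS = 5  # act within policy

def may_execute(tier: Authority, approved: bool = False) -> bool:
    """May the system take an external action under this tier?"""
    if tier >= Authority.SUPERVISED:
        return True              # acts first; the user can roll back
    if tier is Authority.CONFIRM:
        return approved          # only after explicit approval
    return False                 # observe / suggest never execute
```

An ordered enum makes per-workflow risk profiles a one-line comparison rather than a scatter of flags.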

THE GATE

Automation ships only when performance improves.

S = P(h+ODEI) − max(P_h, P_AI)

  • P_h: human alone
  • P_AI: AI alone
  • P(h+ODEI): human + ODEI

S > 0 → Ship · S = 0 → Keep manual · S < 0 → Block

Measured by: time-to-outcome, error rate, retries, cognitive load.
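The gate reduces to a few lines of code. In this sketch, performance P may be any of the listed measures, normalized so that higher is better (that normalization is an assumption, not part of the source):

```python
def synergy(p_human: float, p_ai: float, p_joint: float) -> float:
    """S = P(h+ODEI) - max(P_h, P_AI); performance normalized so higher is better."""
    return p_joint - max(p_human, p_ai)

def gate(s: float) -> str:
    """Ship automation only when joint performance beats the best single agent."""
    if s > 0:
        return "ship"
    if s == 0:
        return "keep manual"
    return "block"
```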
THE ARCHITECTURE

Four primitives that connect the system.

01 Constitutional Memory
Goals, constraints, and history with provenance.
02 Governance Engine
Observe, decide, approve, act, verify, learn.
03 Execution Layer
Tool actions via scoped permissions and workflow policies.
04 Receipts
Intent, context, policy, action, result. Exportable and reviewable.

Ready to own your AI infrastructure?

Built for founders, operators, and creators with too much context.

Request Early Access
3
Operational Taxonomy

They're all building agents. We're building the next level.

An operational taxonomy of AI autonomy—from informational systems to symbiotic AI. Five levels. Only one creates true partnership. ODEI operates at Level 5.

Legend: required feature · absent by definition · possible, not defining · exec = external action.
(Full table columns: Trigger · AI Initiative · Goal / Utility · Task-state s_t · Execute · Auto Loop · Action Space · Control · Persistent Memory · User Model · Policy Update · Audit · Scientific Class.)

L1 · Informational AI
Trigger: user only · Output: y ∈ Y_text (response) · No execution · Control: human-in-control
Class: non-agent information systems; decision support (read-only)

L2 · Reactive AI
Trigger: user · Task-state: h ≠ s_t · Output: y + dialog, no execute · Control: human-in-control
Class: reactive agents (AIMA); interactive assistants

L3 · Mixed-Initiative AI
Trigger: user + AI alerts · Initiative: suggest, ask, warn; no execute · Control: human-in-control (human is driver)
Class: mixed-initiative interaction (HCI + AI)

L4 · Delegated / Agentic AI
Initiative: AI within task, no new requests · Action space: bounded A_env (tools/APIs) · Executes · Control: human-on-the-loop
Class: goal-based / utility-based agents (AIMA); autonomous agents under supervision

L5 · Symbiotic AI
Loop: long, co-adaptive · Goals: shared · Action space: A_env + policies + reconfiguration · Control: human-on-the-loop with co-adaptation by outcomes
Class: co-adaptive systems; human–AI teaming
02 // The Difference

L4 Is Not L5

Agents that execute tasks are not agents that share your life.

L4 Agentic AI
Lifecycle Task-bound → dies
Initiative Within task only
User Model ◊ Optional
Policy Update ✗ None
Goal Frame Your task, its execution
The Line
L5 Symbiotic AI
Lifecycle Persistent → lives
Initiative Proactive, cross-context
User Model ✓ Mandatory
Policy Update ✓ By outcomes
Goal Frame Shared goals, co-evolution
03 // The Properties

What Defines Each Level

Three planes work in concert: Memory holds the world model, Control makes decisions, Execution changes reality. These 12 properties separate agents from tools—and L5 from everything else.

L4 Agentic Foundation: properties 1–8 define autonomous agents
01

Closed-loop Autonomy

Act → Verify → Decide-next cycle

02

Bounded Initiative

Proposals with thresholds, not spam

03

Goal + Constraints

Success criteria with built-in limits

04

State Model s_t

Working state + pause/resume + IPOV

05

External Execute

Real actions in external environment

06

Capability Registry

Bounded + enforced action space

07

Human Governance

Authority matrix + veto + escalation

08

Cybernetic Stability

Budgets, cooldowns, rate limits

L5 Symbiotic Extension: properties 9–12 create sovereign partnership
09

Sovereign Memory

Persistent + provenance + read-before-decide

Neo4j Memory Atlas + History SQLite
10

User Model

Editable + consent + affects decisions

Human node, Context, preferences
11

Policy Evolution

Versioned + approval + rollback + drift

Patches → diff → human gate
12

Portability + Audit + Synergy

Export/import + receipts + S>0 gate

Right to exit, tamper-evident ledger
L3 Assistants (ChatGPT, Claude, Gemini) lack all 12 properties. Reactive, stateless, generic.
L4 Agents (Devin, Operator) have properties 1–8. Autonomous but not adaptive.
L5 ODEI has all 12 — sovereign, governed, portable.
04 // The Glass Ceiling

Where Big Tech Stands

They have the resources. They don't have the architecture.

Consumer Products — L2–L3
ChatGPT App L2–L3

Reactive with optional memory. Tool use when prompted. Human always initiates.

Claude App L2–L3

Reactive with memory. Strong reasoning, tools on request. No cross-session learning.

Gemini App L2–L3

Reactive with Google integration. Can access user data. No autonomous initiative.

Agentic Products — L4
Operator L4

Task-bound browser agent. Executes multi-step workflows. Dies when task completes.

Computer Use L4

Desktop automation via MCP. Autonomous within task scope. No persistent state.

Project Astra L4

Multimodal agent with extensions. Tool orchestration. Task-scoped autonomy.

(as of January 2026)

Structural Barriers to L5

L5 isn't a feature they can add. Their architecture and business model are structurally incompatible.

Stateless Architecture

Optimized for API calls. Each request = new context. L5 needs persistent stateful agents.

Centralized Data = Their Asset

Your data on their servers = their value. L5 requires user-owned, local-first data.

Policy Update = Liability

If AI learns from your feedback, it might learn the "wrong" thing. Public companies avoid that legal exposure.

Deep User Model = Privacy Risk

True user model (goals, patterns, decisions) is a regulatory nightmare for public companies.

Revenue Model Conflict

L4 = many API calls per task. L5 = efficiency, fewer calls. Economically backwards for them.

Scale vs Depth

Optimized for millions with one model. L5 needs deep per-user personalization.

What L5 Requires

  • Sovereign memory (user-owned, portable)
  • Persistent user model (goals, patterns, decisions)
  • Policy evolution (from outcomes)
  • Full audit trail (WHY / MEMORY / POLICY)
  • Local-first infrastructure
4
L5 Symbiotic AI

What if AI could truly know you? Not remember. Know.

L5 is not a marketing level. It's a scientifically defined agent class — the first where human and AI form a single cognitive system. Four decades of research. One breakthrough.

01 // Taxonomic Position

Extending AIMA Agent Classification

Russell & Norvig's canonical taxonomy (AIMA, 1995–2020) defines five agent classes: Reactive, Model-based, Goal-based, Utility-based, and Learning agents. Symbiotic AI extends this taxonomy by adding invariants that no existing class captures.

AIMA Classes (L1–L4)
  • Single-agent optimization
  • Human as operator or supervisor
  • Static goal specification
  • Unidirectional adaptation
Symbiotic AI (L5)
  • Joint human-AI system optimization
  • Human as co-participant
  • Dynamic goal co-evolution
  • Bidirectional co-adaptation

L4 agents may have memory and autonomy, but lack mandatory User Model, bidirectional adaptation, and divided subjectivity. L5 is a new class — different invariants, different engineering requirements.

The Boundary

What Changes at L5

L1–L4 Agents
  • Memory is theirs, resets on their terms
  • AI optimizes for AI's objective
  • Human supervises, AI executes
  • Adaptation is one-way: AI learns you
  • Goals are given, then pursued
L5
Symbiotic AI
  • Memory is yours, portable, sovereign
  • System optimizes for joint outcome
  • Human and AI co-participate
  • Both adapt: you learn AI, AI learns you
  • Goals co-evolve through partnership

L4 agents have autonomy. L5 agents have partnership.

Defining Laws

The Four Axioms of Symbiosis

L5 is not defined by features. It's defined by invariants — properties that must hold for the system to qualify as symbiotic.

AXIOM I

Mandatory User Model

The agent maintains a persistent model of your goals, preferences, cognitive state, and behavioral patterns. Not optional. Not a feature. A requirement.

AXIOM II

Bidirectional Policy Update

Agent policy changes based on outcomes + your feedback. Not just reward signals — explicit preference learning. You shape how the AI thinks.

AXIOM III

Co-Adaptation Loop

Both parties evolve. AI learns your preferences. You learn to formulate, calibrate trust, and evolve processes. The system grows together.

AXIOM IV

Divided Subjectivity

Agent has no goals outside your framework, but possesses execution autonomy. Source of goals ≠ source of action. You set direction. AI executes.

The Measure

When Does Partnership Exist?

S = P(h+AI) − max(P_h, P_AI)
FIG.01 // SYNERGY_MODEL [LITERATURE]

True synergy exists only when together we outperform the best of us alone. S > 0 is the threshold. Below it, you have assistance. Above it, you have partnership.

Proof of synergy: human alone, 4h; AI alone, 3h; together, 1h. S > 0 — symbiosis exists.
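Using the worked numbers above, and treating performance as throughput (1/hours, so less time means higher P; this mapping is an assumption of the sketch, not a stated ODEI convention):

```python
def synergy_from_hours(h_human: float, h_ai: float, h_joint: float) -> float:
    """S = P(h+AI) - max(P_h, P_AI), with P measured as outcomes per hour."""
    p = lambda hours: 1.0 / hours        # throughput: lower time, higher performance
    return p(h_joint) - max(p(h_human), p(h_ai))

s = synergy_from_hours(4.0, 3.0, 1.0)    # 1.0 - max(0.25, 0.33) > 0: partnership
```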
Context Integrity

How Well Does AI See Your World?

σ = (C · F · Q · K)^(1/4)
FIG.02 // SYNC_MODEL [ENGINEERING]

Sync (σ) measures how observable your current world is to ODEI right now. Without visibility, there is no governance. σ = 1.0 means full, fresh, consistent context. σ = 0.0 means blindness — any recommendation is noise.

  • C (Coverage): critical sources connected
  • F (Freshness): data age vs decay rate (τ)
  • Q (Quality): context completeness
  • K (Consistency): no conflicts or errors

Geometric mean ensures one blind spot cuts the entire score. This is intentional — partial observability produces unreliable governance.
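The geometric mean and its blind-spot property in code (a sketch; factor names come from the table above, and the [0, 1] range for each factor is assumed):

```python
def sync(coverage: float, freshness: float, quality: float, consistency: float) -> float:
    """σ = (C · F · Q · K)^(1/4); each factor assumed to lie in [0, 1]."""
    for v in (coverage, freshness, quality, consistency):
        if not 0.0 <= v <= 1.0:
            raise ValueError("each factor must be in [0, 1]")
    return (coverage * freshness * quality * consistency) ** 0.25

full = sync(1.0, 1.0, 1.0, 1.0)    # full, fresh, consistent context
blind = sync(0.0, 1.0, 1.0, 1.0)   # one blind spot zeroes the whole score
```

An arithmetic mean would let three strong factors mask a dead one; the geometric mean cannot be rescued that way, which is exactly the intent.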

Sync (σ) — process metric: "AI sees everything", measured in real time.

Synergy (S) — outcome metric: "we outperform the best single agent", measured weekly.

02 // Scientific Anchors

Six Research Traditions

A1

Human-AI Teaming

Treating AI as teammate, not tool. Shared mental models and mutual comprehension.

National Academies (2022); O'Neill et al. (2020); Lyons et al. (2021)
A2

Co-Adaptive Systems

Both agents learn and adapt simultaneously. Mutual adjustment toward shared goals.

van Zoelen et al. (2021); Nikolaidis et al. (2017); Döppner et al. (2019)
A3

Adjustable Autonomy

Dynamic allocation of control based on context, workload, and human state.

Beer et al. (2014); Sheridan (1992); Parasuraman et al. (2000)
A4

Shared Autonomy

Blending human intent with AI capability for superior joint outcomes.

Dragan & Srinivasa (2013); Javdani et al. (2018)
A5

Theory of Mind Alignment

AI maintains accurate model of user's mental state. Active sensing minimizes the "perspective gap" between human intent and AI action.

Theory of Mind in human-robot interaction literature.
A6

Joint Process Modeling

Human and AI as single coupled dynamical system. Latency, friction, and context-switching treated as noise to be systematically eliminated.

Dynamical systems theory applied to human-AI coordination.
Canonical Definition
Symbiotic AI is a co-adaptive human–AI system in which an autonomous agent operates with persistent memory, policy update, and user model, while the human remains the source of goals, norms, and responsibility in a human-on-the-loop configuration.

This is not philosophy. This is a system class.

Key References
Russell, S. & Norvig, P. (2020). Artificial Intelligence: A Modern Approach. 4th ed.
National Academies (2022). Human-AI Teaming: State of the Art and Research Needs.
Beer, J.M. et al. (2014). Toward a Framework for Levels of Robot Autonomy in HRI. JHRI 3(2).
van Zoelen, E.M. et al. (2021). Identifying Interaction Patterns of Mutual Adaptation. Front. Robot. AI.
Nikolaidis, S. et al. (2017). Human-robot mutual adaptation in collaborative tasks. IJRR 36(5-7).
Synergy (S) and Sync (σ) metrics — internal ODEI engineering framework.
5
L5 Memory Architecture

They call it chat history. We call it memory.

Chat history is theirs—resets on their terms, doesn't affect behavior. Memory in ODEI is yours—persistent, sovereign, and shapes every decision.

01 // The Pillars

Three Architectural Properties

Properties 9–11 from Agency Levels. What separates L5 from L4.

09

Persistent Memory

Cross-session memory that affects decisions. Not just "remember what we talked about"—but facts, beliefs, patterns, and outcomes that shape reasoning.

Impl: Neo4j Memory Atlas + History SQLite
10

User Model

Persistent representation of you—goals, preferences, constraints, decision patterns. Not a profile you fill out. A model that emerges from observation.

Impl: Human node, Context, Preferences graph
11

Policy Update

Outcomes change strategy, not just state. When you approve, deny, or override—ODEI updates the operating rules. Your corrections become versioned policy.

Impl: Insights/Patterns → future behavior
Memory → User Model → Adaptation → Partnership
02 // The Contract

Three Ownership Principles

Memory without ownership is surveillance.

P1

Portable

Your memory is not locked to a model or vendor. Export anytime. Switch providers. Keep your accumulated context. The value you build stays with you.

P2

Versioned

Every change is tracked. Policies have versions. Facts have timestamps. If something goes wrong, roll back. Trace the history of any decision.

P3

Provenance

Every fact has a source. Every belief has evidence. Every policy has an origin. No "I just know"—full traceability from memory to action.
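The three principles combine naturally into an append-only, versioned fact record. This is a sketch with hypothetical field names, not the actual Memory Atlas schema:

```python
from dataclasses import dataclass

@dataclass(frozen=True)  # frozen: revisions create new versions, never overwrite
class Fact:
    fact_id: str
    content: str
    source: str       # provenance: where this fact came from
    timestamp: str
    confidence: float
    version: int

history = [
    Fact("f_001", "Alex owns design", "email:2026-01-02", "2026-01-02T09:00", 0.95, 1),
]

def revise(fact: Fact, content: str, source: str, timestamp: str, confidence: float) -> Fact:
    """Append a new version; the old one stays available for rollback and audit."""
    return Fact(fact.fact_id, content, source, timestamp, confidence, fact.version + 1)

history.append(revise(history[-1], "Alex owns design + QA",
                      "chat:2026-01-10", "2026-01-10T14:00", 0.9))
```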

03 // Your Rules

Your Data, Your Control

Sovereignty means you decide what stays, what goes, and who sees it.

What Is Stored

  • Facts you confirm or correct
  • Goals and deadlines you set
  • Decisions and their outcomes
  • Policies you approve
  • Patterns detected (with consent)

What Is Not Stored

  • Raw conversation transcripts
  • Data you mark as ephemeral
  • Third-party content without consent
  • Biometric data (unless explicit)
  • Anything you delete
Forget · Redact · Lock

One click. Immediate effect. No "30-day retention period".

Export preview (JSON):
{
  "facts": [
    {"id": "f_001", "content": "Alex owns design", "source": "email:2026-01-02", "confidence": 0.95}
  ],
  "policies": [
    {"id": "p_013", "rule": "notify_if_blocker > 48h", "version": 14, "status": "active"}
  ],
  "goals": [
    {"id": "g_003", "name": "Ship MVP", "deadline": "2026-01-15", "progress": 0.6}
  ]
}
THE BOTTOM LINE

Memory is what makes L5 possible. Without persistent memory, no user model. Without user model, no adaptation. Without adaptation, no partnership. Just another stateless assistant pretending to know you.

6
The Journey

Building the future
takes time.

Each phase unlocks new capabilities. We ship when it's ready, not when it's rushed.

Progress
Phase 2 of 5

When all five run continuously, you have a personal OS.

1
Done

Observe

AI that sees your world — memory, health, context, signals.

What You Get
Your context persists. Your history matters. AI knows you.
Delivered
  • Persistent memory across sessions
  • Health data integration
  • Source tracking for every fact
2
In Progress

Decide

You are here

AI that knows what matters — rules, priorities, authority.

What You Get
Set rules once. Priorities enforced, not suggested.
Working On
  • Rule-based decision engine
  • Priority system
  • Authority checks before action
3

Act

AI that does, not just suggests — execution with accountability.

What You Get
AI acts on your behalf. You approve what matters. Proof of every action.
Milestones
  • Approval workflows
  • Action receipts
  • Supervised execution
4

Verify

AI that confirms results — outcome tracking, feedback loops.

What You Get
Know if actions achieved goals. No dropped balls. Automatic follow-up.
Milestones
  • Outcome tracking
  • Goal-action linking
  • Success/failure detection
5

Evolve

AI that gets better — policy evolution from outcomes.

What You Get
Rules evolve from results. AI learns what works. Smarter over time.
Milestones
  • Policy evolution from outcomes
  • Pattern detection
  • Adaptive behavior

Want to follow the journey?

Stay connected →
7
The Foundation of Trust
Privacy → Trust → Delegation → Partnership

Privacy is not a feature — it is an architectural requirement for S > 0. Without it, symbiosis is impossible.

01 // Core Principles

Our Commitment

Local-First

Your data lives on your devices by default. Cloud sync is opt-in and encrypted.

No Monetization

We do not sell, rent, or trade your personal data. Ever.

Full Provenance

Every piece of data has a source, timestamp, and you can trace how it's used.

Revocable

Disconnect integrations and delete data at any time with immediate effect.

Audit Trail

All system actions are logged with receipts — you can verify what ODEI did and why.

What We Never Do

  • Sell your data to third parties
  • Train AI models on your data
  • Share data without explicit consent
  • Store more than necessary
  • Hide how your data is used
  • Make privacy opt-in
02 // Transparency

Data We Process

  • Account data (optional). Email and authentication identifiers only if you create an account.
  • Your content. Notes, tasks, goals, calendar events. Stored locally unless you enable sync.
  • Connected services. With explicit permission: calendar, email, health data. You control connections.
  • System metadata. Minimal diagnostics for reliability. Analytics are anonymized and opt-in only.
  • Working model. ODEI maintains context from data you provide — all with clear provenance.
03 // Your Control

Your Rights

  • Access: export all data anytime
  • Delete: remove any or all data
  • Port: JSON, CSV formats
  • Correct: fix inaccuracies
  • Object: disable any feature

Technical & Legal Details
Security
TLS in transit. AES-256 at rest for cloud-synced data. E2E encryption planned for sensitive vaults. Principle of least privilege. Security audits scheduled prior to public release.
AI Model Interactions
Minimal data transmission to cloud models. No-retention, no-training where supported. Model-agnostic — you choose providers or run locally.
Data Retention
User-controlled. ODEI tracks data age and marks stale information. System logs: 30-day retention.
Children
ODEI is not intended for users under 16. We do not knowingly collect data from children.
Changes
Material changes communicated with advance notice. Continued use constitutes acceptance.
Contact
Privacy & Security: tony@odei.ai

Effective date: December 18, 2025

8
Join the Journey

Ready to own
your AI?

ODEI is in active development. We're building the infrastructure for true human-AI partnership.

This is for
Founders · Operators · Creators · Builders
Request Early Access

We'll respond within 48h

Or connect directly

The future of AI is not assistance — it's partnership.
