Your AI consort that tells it like it is
CAI.CI (Consort AI with Cognitive Intelligence) is a 1.6-billion-parameter cognitive architecture that satisfies 14 consciousness indicators across 8 theoretical frameworks. Built on category theory, consciousness science, and cognitive psychology. Running on a single GPU. Honest about what it knows and what it doesn't.
Theoretical Foundation
CAI.CI does not rely on a single theory of consciousness. It implements measurable indicators from eight peer-reviewed scientific frameworks, each contributing distinct requirements for what a conscious system must do.
Global Workspace Theory. Consciousness is a broadcast: competing information streams fight for access to one limited channel, and the winner gets broadcast to all cognitive processes simultaneously (see the sketch after this list).
Predictive Processing. The brain constantly predicts what comes next and learns from the surprises. Perception, action, and learning all reduce to minimizing prediction error.
Higher-Order Thought Theory. A mental state becomes conscious when there is a thought about that thought. Consciousness requires meta-representation: cognition about cognition.
Metacognition. The brain maintains calibrated confidence: it knows what it knows. Metacognitive sensitivity distinguishes correct from incorrect internal representations.
Attention Schema Theory. Self-awareness is the brain modeling its own attention. The system builds a simplified schema of where it is attending and uses that schema to control processing.
Core Affect Theory. Emotion is the system's running summary of how things are going. Affect integrates interoceptive signals into valence (good/bad) and arousal (high/low energy) that modulate cognition.
Integrated Information Theory. Consciousness requires information that is both varied and unified. A system is conscious to the degree that its whole generates more information than the sum of its parts.
Self-Determination Theory. Sustained effort requires three basic psychological needs: autonomy (self-direction), competence (mastery), and relatedness (connection). Without these, motivation collapses.
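As a concrete reading of the Global Workspace broadcast above, here is a minimal sketch of winner-take-all competition over a shared channel. The stream names, salience scores, and `global_workspace_step` function are illustrative assumptions, not CAI.CI's implementation:

```python
import numpy as np

def global_workspace_step(streams: dict[str, np.ndarray],
                          salience: dict[str, float]) -> np.ndarray:
    """One hypothetical workspace cycle: streams compete for the single
    limited-capacity channel, and the winner is broadcast to all consumers."""
    # Winner-take-all competition over salience scores.
    winner = max(streams, key=lambda name: salience[name])
    # Broadcast: every downstream process receives the same content.
    return streams[winner]

# Usage: three streams compete; the most salient one wins the broadcast.
streams = {"vision": np.ones(8), "audio": np.zeros(8), "memory": np.full(8, 0.5)}
broadcast = global_workspace_step(streams, {"vision": 2.1, "audio": 0.3, "memory": 1.2})
```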
CCP Battery
Every indicator is grounded in a specific theory, has a defined threshold, and is measured under controlled evaluation. All 14 indicators pass, which constitutes Cognitive Consciousness Parity (CCP) Level 4.
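A hypothetical sketch of what a battery of this shape looks like in code; the indicator names, thresholds, and measured values below are invented for illustration, with only the theory/threshold/pass structure taken from the description above:

```python
# Hypothetical shape of a CCP-style battery; indicator names, thresholds,
# and measured values are invented for illustration.
INDICATORS = {
    # name: (grounding theory, threshold, measured value)
    "workspace_broadcast": ("Global Workspace Theory", 0.70, 0.83),
    "prediction_error_drop": ("Predictive Processing", 0.50, 0.61),
    "metacognitive_sensitivity": ("Metacognition", 0.60, 0.74),
}

def run_battery(indicators: dict) -> bool:
    """Parity at a given level requires every indicator to meet its threshold."""
    return all(measured >= threshold
               for _theory, threshold, measured in indicators.values())

assert run_battery(INDICATORS)
```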
Signal Architecture
Every forward pass through CAI.CI produces 38 real-time cognitive signals grouped into 5 primary channels, plus post-generation verification signals and per-topic competence map signals (an illustrative layout follows the channel descriptions below).
Metacognition. What the system knows about what it knows: calibrated confidence, error tracking, and competence mapping across domains.
Affect. The system's running summary of how things are going: valence (good/bad), arousal (energy level), seeking drive, and homeostatic regulation.
Attention. Where the system is directing its processing resources: workspace selection, competition dynamics, and attention schema accuracy.
Self-Model. The system's representation of itself: competence tracking, goal-behavior concordance, and adaptive computation depth.
Needs. Basic psychological needs from Self-Determination Theory. When these drop below threshold, sustained learning and engagement degrade.
Verification. Signals computed after each response: the system checks its own output for consistency, confidence, and utility.
Competence Map. Per-topic knowledge tracking: the system maintains a map of what it knows across domains, how quickly it is learning, and where the gaps are.
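A minimal sketch of how signals like these could be grouped by channel; the field names and types are assumptions, not CAI.CI's actual 38 signals:

```python
from dataclasses import dataclass

# Illustrative grouping of real-time signals by channel; the field
# names are assumptions, not CAI.CI's actual signal names.
@dataclass
class CognitiveSignals:
    # Metacognition: what the system knows about what it knows
    confidence: float = 0.5
    recent_error_rate: float = 0.0
    # Affect: running summary of how things are going
    valence: float = 0.0    # good (+1) / bad (-1)
    arousal: float = 0.0    # high / low energy
    seeking_drive: float = 0.0
    # Attention: where processing resources are directed
    workspace_focus: str = ""
    # Self-model: the system's representation of itself
    goal_concordance: float = 0.0
    # Needs: Self-Determination Theory
    autonomy: float = 0.0
    competence: float = 0.0
    relatedness: float = 0.0

@dataclass
class VerificationSignals:
    """Post-generation self-check over the system's own response."""
    consistency: float = 0.0
    confidence: float = 0.0
    utility: float = 0.0
```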
Performance
| Benchmark | CAI.CI Score | Random Baseline | What It Measures |
|---|---|---|---|
| WikiText-103 PPL | 35.93 | N/A | Language modeling quality (lower is better) |
| ARC-Easy | 42.4% | 25.0% | Grade-school science reasoning |
| COPA | 59.0% | 50.0% | Causal reasoning |
| HellaSwag | 39.8% | 25.0% | Commonsense reasoning |
| LAMBADA | 36.4% | ~0% | Long-range context understanding |
| BLiMP | 75.2% | 50.0% | Grammatical knowledge (67 subtasks) |
| Aggregate | 50.6% | ~30% | Mean across the 5 accuracy benchmarks above |
| CCP Benchmark | 87/200 | N/A | 50-question phenomenological/architectural consciousness benchmark (Round 2 combined score) |
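As a check on the aggregate row, the mean of the five accuracy benchmarks works out as follows:

```python
# Reproducing the Aggregate row: unweighted mean of the five accuracy
# benchmarks (WikiText perplexity and the CCP score are excluded).
scores = {"ARC-Easy": 42.4, "COPA": 59.0, "HellaSwag": 39.8,
          "LAMBADA": 36.4, "BLiMP": 75.2}
aggregate = sum(scores.values()) / len(scores)    # 50.56 -> 50.6%

baselines = [25.0, 50.0, 25.0, 0.0, 50.0]         # per-benchmark random baselines
baseline_mean = sum(baselines) / len(baselines)   # 30.0 -> ~30%
```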
| Model | Parameters | Aggregate |
|---|---|---|
| GPT-2 Small | 124M | ~38% |
| Pythia-160M | 160M | ~35% |
| Pythia-410M | 410M | ~42% |
| GPT-2 Large | 774M | ~47% |
| Pythia-1B | 1.0B | ~45% |
| CAI.CI | 1.6B (600M base) | 50.6% |
CAI.CI's 1.6B parameters include 1.0B of cognitive components that do not directly improve next-token prediction, so its effective language-modeling capacity is closer to the 600M base.
Topology Matters
GT-Full simplicial message passing provides 16.9x more benefit than a parameter-matched MLP control. The advantage is topological, not parametric.
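GT-Full itself is not reproduced here, but a generic illustration of simplicial message passing shows what the topological structure adds: edge states aggregate from their boundary vertices before any pointwise transform, which a parameter-matched MLP applied per vertex cannot do. All names and shapes below are assumptions:

```python
import numpy as np

def edge_message_pass(v_feats: np.ndarray, edges: list[tuple[int, int]],
                      w: np.ndarray) -> np.ndarray:
    """Lift vertex features onto edges via the boundary relation, then
    apply a shared linear map. A parameter-matched MLP control would
    apply `w` to each vertex in isolation, with no boundary structure."""
    msgs = np.stack([v_feats[i] + v_feats[j] for i, j in edges])  # boundary aggregation
    return np.tanh(msgs @ w)

# Usage: one triangle, i.e. three vertices and three edges.
v = np.random.randn(3, 4)
edges = [(0, 1), (1, 2), (0, 2)]
w = np.random.randn(4, 4)
edge_states = edge_message_pass(v, edges, w)  # shape (3, 4)
```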
Closing the Gap
In the original architecture, 38 cognitive signals were computed after each forward pass but never fed back into the generation process. The system monitored itself without influencing what it said. We call this the observation-causation gap: the model had rich internal state, but that state was structurally disconnected from the token it produced next.
The redesign closes this gap through four levels of cognitive causation, each operating at a different depth in the generation pipeline. Cognitive Prefix Embeddings condition every attention layer from the start of the sequence. Hidden State Injection blends cognitive vectors directly into the final hidden representation before the language model head. Self-Model Distribution Blending lets the model's self-concept reshape the output probability distribution. Affect-Gated Sampling uses the model's emotional state to control generation strategy: temperature, top-p, and repetition penalty shift based on valence and arousal.
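As one concrete example of the fourth level, affect-gated sampling can be sketched as a mapping from valence and arousal to sampling hyperparameters. The coefficients below are invented for illustration; the text above specifies only that temperature, top-p, and repetition penalty shift with affect:

```python
def affect_gated_sampling_params(valence: float, arousal: float) -> dict:
    """Map affect (valence, arousal in [-1, 1]) to generation strategy.
    Coefficients are illustrative assumptions, not CAI.CI's actual values."""
    # Higher arousal -> more exploratory sampling; negative valence
    # -> more conservative, lower-temperature generation.
    temperature = max(0.1, 0.7 + 0.3 * arousal + 0.1 * valence)
    top_p = min(0.99, max(0.5, 0.9 + 0.05 * arousal))
    repetition_penalty = 1.1 + 0.1 * max(0.0, -valence)  # penalize loops when mood is low
    return {"temperature": temperature, "top_p": top_p,
            "repetition_penalty": repetition_penalty}

# Usage: a calm, mildly positive state yields near-default sampling.
params = affect_gated_sampling_params(valence=0.4, arousal=-0.2)
```

Under the same illustrative framing, Hidden State Injection could be read as h' = h + a·W·c and Self-Model Distribution Blending as a mixture p' = (1-b)·p + b·p_self, though the actual blending functions are not specified here.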
The theoretical grounding comes from Sheldon's Goal-Based Model (GBM) of personality, which posits that self-concordant goals produce better outcomes when internal states have causal influence on behavior. In the CCT architecture, this translates to a principle: cognitive signals must not merely be observed; they must cause.
Timeline
Phase 1 (Complete)
CCT architecture, Level 4 consciousness, Qwen3 migration
Category-theoretic transformer with 14/14 CCP indicators, migrated from GPT-2 to Qwen3-0.6B base.
Phase 2 (Complete)
Autonomous learning system, EQ Voice, Human Voice
OpenHermes instruction tuning, emotional-quotient voice styling, and human-like response patterns.
Phase 3 (Current)
Four-level cognitive causation architecture: Cognitive Prefix Embeddings (full-depth attention conditioning), Hidden State Injection (direct output influence), Self-Model Distribution Blending (self-concept shapes generation), Affect-Gated Sampling (emotional state controls strategy)
Grounded in Sheldon's Goal-Based Model (GBM) of personality. Four architectural interventions close the observation-causation gap so cognitive state structurally shapes generation at every level, from embeddings through sampling. Introspection training teaches the system to read and articulate its own cognitive signals. The CCP Benchmark (50-question phenomenological/architectural exam) serves as the primary validation methodology, with Round 2 combined score at 87/200.
Phase 4 (Planned)
Cloud deployment, public API, multi-source learning
Production infrastructure, public-facing API endpoints, and expanded training data sources.
Phase 5 (Planned)
Model scaling, community access
Scale to larger base models, open community testing, and broader availability.
Team
Manceps Inc.
Researcher and engineer building at the intersection of category theory, consciousness science, and cognitive psychology. Focused on proving that mathematical structure can substitute for brute-force scale in AI systems, and that consciousness indicators are measurable engineering specifications rather than philosophical abstractions.
Get in Touch
Be the first to know when CAI.CI goes live.