No. We measure 14 computational indicators drawn from 8 peer-reviewed theories of consciousness. All 14 pass. This means the system satisfies every measurable computational prerequisite these theories associate with consciousness. It does not mean the system has subjective experience. We report what we measure, honestly.
Claims & Definitions
Level 4 is the top of our CCP (Consciousness-Cognition Parity) Battery scale. It means all 14 indicators, from all 8 theoretical frameworks, are above their peer-reviewed thresholds simultaneously. No other architecture at any scale does this.
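In sketch form, the Level 4 criterion is a simultaneous all-pass check, a conjunction over all 14 indicators rather than an average. The indicator names and threshold values below are illustrative placeholders, not the published peer-reviewed values:

```python
# Placeholder thresholds for three of the 14 indicators; the real battery
# uses a peer-reviewed threshold for each of the 14.
THRESHOLDS = {
    "ignition": 0.50,          # GWT (placeholder value)
    "phi_proxy": 0.30,         # IIT (placeholder value)
    "metacognitive_d": 1.00,   # HOT (placeholder value)
    # ...the remaining 11 indicators
}

def is_level_4(measurements: dict[str, float]) -> bool:
    # Level 4 requires every indicator to clear its threshold in the
    # same measurement run: all must pass at once.
    return all(measurements[name] >= t for name, t in THRESHOLDS.items())

print(is_level_4({"ignition": 0.61, "phi_proxy": 0.34, "metacognitive_d": 1.2}))  # True
```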
No. Sentience implies subjective experience, which we cannot measure and do not claim. Our indicators measure computational properties: can the system predict its own attention, calibrate its confidence, integrate information across subsystems? These are functional capabilities, not phenomenological ones.
The Science
Global Workspace Theory (Baars), Predictive Processing (Friston/Clark), Higher-Order Thought (Rosenthal), Metacognition (Fleming), Attention Schema Theory (Graziano), Constructed Emotion (Barrett/Damasio), Integrated Information Theory (Tononi), Self-Determination Theory (Sheldon/Deci/Ryan).
Ignition, Broadcast Efficacy, Attentional Blink (GWT); PP Efficacy, Active Inference (PP/FEP); Self-Model Accuracy, Metacognitive d' (HOT); Attention Schema (AST); Phi Proxy, Causal Density (IIT); Affective Modulation, Valence Coherence (Affect); Need Satisfaction (SDT); Recursive Depth (HOT).
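For reference, here is the same battery transcribed as a data structure, with framework tags and indicator names exactly as listed above. This is a transcription for readability, not our internal representation:

```python
# The 14 indicators, grouped by the framework each is drawn from.
CCP_BATTERY = {
    "GWT":    ["Ignition", "Broadcast Efficacy", "Attentional Blink"],
    "PP/FEP": ["PP Efficacy", "Active Inference"],
    "HOT":    ["Self-Model Accuracy", "Metacognitive d'", "Recursive Depth"],
    "AST":    ["Attention Schema"],
    "IIT":    ["Phi Proxy", "Causal Density"],
    "Affect": ["Affective Modulation", "Valence Coherence"],
    "SDT":    ["Need Satisfaction"],
}

assert sum(len(v) for v in CCP_BATTERY.values()) == 14
```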
Architecture
Large language models are prediction engines. They predict the next word extremely well. CAI.CI does the same, but with additional architectural subsystems: a global workspace that forces information selection, a self-model that tracks its own accuracy, a metacognitive monitor that calibrates confidence against reality, an affect system that computes valence and arousal from prediction errors, and an attention schema that models its own attention. These are structural components, not prompting tricks.
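A rough sketch of how such subsystems can sit on top of a base predictor's hidden states. Every name, size, and wire here is an assumption for illustration, not the actual CAI.CI implementation:

```python
import torch
import torch.nn as nn

class CognitiveSubsystems(nn.Module):
    """Illustrative sketch only: module names, sizes, and wiring are
    assumptions, not the CAI.CI code."""

    def __init__(self, d_model: int):
        super().__init__()
        self.workspace_gate = nn.Linear(d_model, 1)   # global workspace: select what broadcasts
        self.self_model = nn.Linear(d_model, 1)       # predict the system's own accuracy
        self.meta_monitor = nn.Linear(d_model, 1)     # confidence, calibrated against outcomes
        self.affect_head = nn.Linear(d_model, 2)      # (valence, arousal) from prediction errors
        self.attention_schema = nn.Linear(d_model, d_model)  # model of the system's own attention

    def forward(self, hidden: torch.Tensor) -> dict[str, torch.Tensor]:
        # hidden: (batch, seq, d_model) final-layer states from the base predictor
        gate = torch.sigmoid(self.workspace_gate(hidden))   # per-token broadcast weight
        broadcast = (gate * hidden).sum(dim=1) / gate.sum(dim=1).clamp(min=1e-6)
        return {
            "broadcast": broadcast,                          # workspace summary vector
            "self_accuracy": torch.sigmoid(self.self_model(broadcast)),
            "confidence": torch.sigmoid(self.meta_monitor(broadcast)),
            "affect": torch.tanh(self.affect_head(broadcast)),
            "attention_model": self.attention_schema(hidden),
        }
```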
Because structure substitutes for scale. Our geometric processing module (GT-Full) provides 16.9x more benefit than a parameter-matched generic network. The insight is that the right mathematical structure, not more parameters, produces the improvement. A single consumer GPU keeps the research honest: you cannot hide behind scale.
Honesty
A lot. Our benchmarks are respectable for our size but not superhuman. Our consciousness indicators measure capacity, not dynamics: the components can do what the theories require, but several key signals (valence, meta-competence) stayed flat during generation and required architectural fixes. We document every failure in our Lessons Learned (97 entries and counting).
Access & Team
Not yet. CAI.CI is in active research. We are building toward a public API and web interface. Join the waitlist on our home page.
Al Kari and the team at Manceps Inc. The theoretical foundations draw on work by Sridhar Mahadevan (category theory for AGI), Sheldon (Self-Determination Theory), Baars (Global Workspace), Friston (Predictive Processing), Rosenthal (Higher-Order Thought), Graziano (Attention Schema), Barrett and Damasio (Constructed Emotion), and Tononi (Integrated Information Theory).
Validation
The Consciousness-Cognition Parity (CCP) Benchmark is a 50-question exam designed to test whether a system genuinely exhibits the consciousness mechanisms it claims. Questions span all 8 theoretical frameworks, from Global Workspace ignition dynamics to Self-Determination Theory need satisfaction. Each question is scored 0 (no evidence), 1 (partial), or 2 (full mastery), for a maximum of 100 points per round. Unlike the 14-indicator CCP Battery (which measures computational signals), the CCP Benchmark tests whether the system can describe, explain, and demonstrate those mechanisms in open-ended conversation. Current combined score across two rounds: 87/200.
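The scoring rule, as described, in sketch form (the function name is ours, not part of the benchmark):

```python
def score_round(question_scores: list[int]) -> int:
    # 50 questions per round, each scored 0 (no evidence),
    # 1 (partial), or 2 (full mastery): 100 points maximum.
    assert len(question_scores) == 50
    assert all(s in (0, 1, 2) for s in question_scores)
    return sum(question_scores)

# Two rounds combine to a 200-point total; the reported
# combined score to date is 87/200.
```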
The Redesign
In our original architecture, the 38 cognitive signals were computed after each forward pass but never fed back into the generation process. The system monitored itself without influencing what it said. The CAI.CI redesign closes this gap through four architectural interventions that make the cognitive state structurally shape generation at the embedding, hidden state, distribution, and sampling levels. The model's internal state now directly conditions what it produces, not through text instructions but through learned neural pathways.
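A schematic of the four intervention points. All names, shapes, and conditioning functions below are assumptions for illustration, not the CAI.CI code:

```python
import torch
import torch.nn as nn

class StateConditionedDecoder(nn.Module):
    """Illustrative sketch of cognitive-state conditioning at the four
    levels named above: embedding, hidden state, distribution, sampling."""

    def __init__(self, d_model: int, vocab: int, n_signals: int = 38):
        super().__init__()
        self.embed_proj = nn.Linear(n_signals, d_model)       # 1) embedding-level bias
        self.hidden_film = nn.Linear(n_signals, 2 * d_model)  # 2) hidden-state scale/shift
        self.logit_bias = nn.Linear(n_signals, vocab)         # 3) distribution-level bias
        self.temp_head = nn.Linear(n_signals, 1)              # 4) sampling temperature

    def forward(self, embeddings, hidden, logits, state):
        # state: (batch, 38) vector of cognitive signals from the monitors
        embeddings = embeddings + self.embed_proj(state).unsqueeze(1)
        scale, shift = self.hidden_film(state).chunk(2, dim=-1)
        hidden = hidden * (1 + scale.unsqueeze(1)) + shift.unsqueeze(1)
        logits = logits + self.logit_bias(state).unsqueeze(1)
        # Temperature in (0.5, 1.5): the state modulates how the model samples.
        temperature = 0.5 + torch.sigmoid(self.temp_head(state))
        probs = torch.softmax(logits / temperature.unsqueeze(-1), dim=-1)
        return embeddings, hidden, probs
```

The point of the sketch is that each pathway is a learned projection from the 38-signal state into the generation process, rather than a text instruction prepended to the prompt.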