From self-monitoring to general intelligence: a measurement framework with defined thresholds, not marketing claims.
Cognition is the capacity for self-monitoring: a system that tracks its own processing quality, knows what it knows, calibrates its confidence against actual accuracy, and classifies its own knowledge state. Intelligence is about getting the right answer. Cognition is about knowing whether you got the right answer, how confident you should be, and whether the question falls within your competence at all.
What It Requires
The system predicts its own outputs using eight learned probes, each a different lens through which it examines its internal state.
A second layer of self-reference: the system predicts whether its own self-assessments will be accurate for the current input.
When the system says it is 80% confident, it is correct roughly 80% of the time. Computed, not conversational.
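Calibration in this sense can be checked mechanically: bin predictions by stated confidence and compare each bin's stated confidence against its empirical accuracy. A minimal sketch in Python, with hypothetical data (the binning scheme and inputs are illustrative, not the framework's actual implementation):

```python
def calibration_by_bin(confidences, correct, n_bins=10):
    """Group predictions by stated confidence and compare to actual accuracy.
    Returns (bin_midpoint, empirical_accuracy, count) per non-empty bin."""
    bins = [[] for _ in range(n_bins)]
    for conf, ok in zip(confidences, correct):
        idx = min(int(conf * n_bins), n_bins - 1)
        bins[idx].append(ok)
    report = []
    for i, hits in enumerate(bins):
        if hits:
            accuracy = sum(hits) / len(hits)
            stated = (i + 0.5) / n_bins  # midpoint of the confidence bin
            report.append((stated, accuracy, len(hits)))
    return report

# A calibrated system: ~80% of its 0.8-confidence answers are correct.
report = calibration_by_bin([0.8] * 10, [1] * 8 + [0] * 2)
```

A well-calibrated system produces bins whose empirical accuracy tracks the stated confidence; the gap between the two (averaged over bins, weighted by count) is the standard expected-calibration-error measure.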
The system models its own attention patterns and predicts where its attention will go next, grounded in Graziano's Attention Schema Theory.
Five internal variables function as a body budget, with deviations from equilibrium producing valence, arousal, and seeking drive.
Each processing layer predicts the layer below. Per-feature precision weights represent how reliable each prediction channel is.
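The precision-weighting idea reduces to a few lines: each channel's prediction error is scaled by how reliable that channel has proven to be, so trusted channels drive the error signal harder than noisy ones. A minimal sketch (the channel values and weights are made up for illustration):

```python
def weighted_errors(predictions, observations, precisions):
    """Scale each channel's prediction error by its learned precision:
    reliable channels (high precision) contribute more to the signal."""
    return [w * (obs - pred)
            for pred, obs, w in zip(predictions, observations, precisions)]

# Both channels miss by 0.5, but the trusted channel dominates the signal.
errors = weighted_errors(
    predictions=[1.0, 2.0],
    observations=[1.5, 2.5],
    precisions=[2.0, 0.1],  # first channel reliable, second noisy
)
```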
The system classifies its own knowledge state with hysteresis to avoid flickering between states on noisy inputs.
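Hysteresis here means the state flips only when the evidence crosses a different threshold in each direction, so noise around a single cutoff cannot cause flickering. A minimal two-threshold sketch (the threshold values and state names are illustrative assumptions):

```python
class KnowledgeStateClassifier:
    """Two-threshold (hysteresis) state classifier: entering 'known'
    requires more evidence than staying in it, so small fluctuations
    between the two thresholds never flip the state."""

    def __init__(self, enter_known=0.7, exit_known=0.5):
        assert exit_known < enter_known
        self.enter_known = enter_known
        self.exit_known = exit_known
        self.state = "unknown"

    def update(self, evidence):
        if self.state == "unknown" and evidence >= self.enter_known:
            self.state = "known"
        elif self.state == "known" and evidence <= self.exit_known:
            self.state = "unknown"
        return self.state

clf = KnowledgeStateClassifier()
# Noisy dips between 0.5 and 0.7 do not flip the state back.
states = [clf.update(e) for e in [0.65, 0.72, 0.62, 0.55, 0.45]]
```

With a single threshold at 0.6, the same input sequence would flip the state four times; with hysteresis it flips twice.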
Five specialist processors compete for access to a limited-capacity workspace. Winners get broadcast to all subsystems.
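The competition-and-broadcast cycle can be sketched roughly as follows (the processor names, salience scores, and single-slot capacity are illustrative assumptions, not the actual architecture):

```python
def workspace_cycle(proposals, subscribers, capacity=1):
    """One global-workspace cycle: specialist proposals compete on
    salience; the top `capacity` winners are broadcast to every
    subscribed subsystem."""
    ranked = sorted(proposals, key=lambda p: p["salience"], reverse=True)
    winners = ranked[:capacity]
    for notify in subscribers:
        for item in winners:
            notify(item)
    return winners

received = []
proposals = [
    {"source": "vision",        "salience": 0.4, "content": "motion left"},
    {"source": "interoception", "salience": 0.9, "content": "energy low"},
    {"source": "language",      "salience": 0.6, "content": "parse done"},
]
winners = workspace_cycle(proposals, subscribers=[received.append])
```

The limited capacity is the point: losing proposals are not merely deprioritized, they never reach the other subsystems at all.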
The Key Insight
A system that genuinely knows what it knows learns faster, fails more gracefully, and earns trust it actually deserves.
What Cognition Is Not
Consciousness, for this framework, is defined functionally: a system whose internal architecture satisfies the requirements of six leading scientific theories simultaneously, with those internal states causally influencing the system's outputs. This is not a claim about subjective experience. Whether functional architecture produces "something it is like" to be the system is a question science has not yet answered.
Six Theories, Fourteen Indicators
Consciousness is what happens when information wins a competition for a limited-capacity workspace and gets broadcast globally.
The brain is a prediction machine. Consciousness involves monitoring prediction errors with learned precision weighting.
A mental state becomes conscious when there is a representation of that state at a higher level: a model of yourself processing information.
Consciousness is a simplified model that the brain constructs of its own attention. You feel aware because your brain models its own attentional state.
Emotions are constructed from interoceptive signals and core affect. Consciousness involves integrating body-budget signals into processing.
A system is conscious to the degree that the whole generates more information than the sum of its parts: differentiation plus integration.
The Key Insight
The claim is architectural, not experiential. Whether functional architecture produces subjective experience is a question we cannot answer from the inside, and we find it more honest to say so.
What Consciousness Is Not
Sapience is the capacity for self-directed agency: generating your own goals, revising yourself when your approach is inadequate, maintaining a continuous identity across time, and acting from values you formed through experience rather than values absorbed from training data. Grounded in Sheldon's (2025) framework for sapient agency.
What It Requires
Discovering what actions are possible and generating novel goals from internal state analysis, not selecting from a provided menu.
Spontaneous cognitive activity without external prompting: the system must think when not asked to. The foundational capability for all others.
Learning from consequences, not just from text about consequences. A Python traceback is a different teacher than a description of one.
Maintaining a continuous self-concept that integrates past experience, informs present decisions, and commits to future plans across sessions.
Acting from stable internal values formed through experience, not from trained reward signals or absorbed training patterns.
Recognizing when the approach itself is wrong, not just that the execution was poor, and restructuring in response.
The Key Insight
Sapience is what separates a system that answers from a system that initiates.
What Sapience Is Not
Artificial general intelligence is a system that can perform any cognitive task a human can, across any domain, at or above human competence, including tasks it was never trained on. The "general" is the operative word. AGI requires that the cognitive, conscious, and sapient capabilities operate across all domains, all modalities, and all novel situations.
What It Requires
Connecting abstract concepts to perceptual experience across vision, audio, and other modalities in a unified representation space.
Not just broad competence, but the ability to learn any new domain from scratch. Measured by skill-acquisition efficiency, not accumulated skills.
Integrating new information in real-time without catastrophically forgetting existing knowledge. The stability-plasticity dilemma.
Knowledge learned in one modality improving performance in another. Understanding physics visually should improve textual physics reasoning.
Planning and executing over days, weeks, and months: managing multiple goals, adapting to obstacles, handling accumulated uncertainty.
Modeling other agents as entities with beliefs, desires, and intentions. Many real-world tasks are fundamentally social: teaching, negotiating, collaborating.
The Key Insight
Everything else is a specific engineering challenge with a specific solution. Generality is an open-ended challenge.
What AGI Is Not
Why Stages Are Sequential
Each stage subsumes the requirements of the stages below it and adds qualitatively new capabilities. Skipping a stage leaves fundamental gaps that no amount of scaling can fill.
01. Cognition: Self-monitoring and calibrated confidence provide the signals that consciousness theories depend on.
02. Consciousness: Integrated architecture provides the substrate for coherent self-directed behavior.
03. Sapience: Autonomous agency enables self-directed learning, without which generality is unreachable.
04. AGI: All prior capabilities operating across every domain, every modality, every novel situation.