Behaviours
What Aaron is actually doing.
The same robot. The same house. Seven different moments. Every part of the system — visible.
At each moment, two brains are working. The reflexive brain processes the now — sensors, context key, coherence gate, behavioral output — every tick, in real time. The deliberative brain works in the background — loading suppression maps, compiling habits, optimising how trust flows between contexts.
Aaron arrives. The family stands around the kitchen island, curious. Someone says: "Aaron, tell us a joke."
Reflexive brain
Every tick · real time · no_std
- Sensors: proximity close, sound moderate, light bright, motion detected
- Context key constructed: bright:moderate:close:upright:afternoon
- Accumulator lookup: context not seen before — starts at 0.0
- Instantaneous coherence: 0.04 (sensor variance is high, room is busy)
- Effective coherence = min(0.04, 0.0) = 0.0
- Behavioral phase: Shy Observer
- Motor amplitude: 5% · LED: dim blue · Audio: silent
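The tick above can be sketched in a few lines. The key format, the unseen-context default of 0.0, the minimum gate, and the two phase names come from this walkthrough; everything else (function names, the 0.2 threshold) is an illustrative assumption, written in Python for clarity even though the reflexive brain itself is described as no_std:

```python
accumulators = {}  # persistent per-context trust; empty on first boot

def context_key(light, sound, proximity, posture, time_of_day):
    # Fingerprint the situation, not the person.
    return f"{light}:{sound}:{proximity}:{posture}:{time_of_day}"

def effective_coherence(instantaneous, accumulated):
    # The gate is a minimum: a calm room cannot compensate for an
    # unearned context, and accumulated trust cannot override a busy room.
    return min(instantaneous, accumulated)

def behavioral_phase(coherence):
    # Hypothetical threshold; only the two named phases appear in the text.
    return "Shy Observer" if coherence < 0.2 else "Quietly Beloved"

def tick(light, sound, proximity, posture, time_of_day, instantaneous):
    key = context_key(light, sound, proximity, posture, time_of_day)
    accumulated = accumulators.get(key, 0.0)  # unseen context starts at 0.0
    coherence = effective_coherence(instantaneous, accumulated)
    return key, coherence, behavioral_phase(coherence)
```

Running it on the kitchen-island moment reproduces the numbers above: effective coherence min(0.04, 0.0) = 0.0, phase Shy Observer.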
Deliberative brain
Background · persistent memory · offline consolidation
- No suppression maps for this context yet
- No compiled routines — nothing has been repeated
- Mixing matrix: near-identity (first hour of operation)
- Upward channel: NovelContext signal sent for this key
- Deliberative notes: begin accumulating trajectory data
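A minimal sketch of the background side, assuming a simple message from the reflexive brain: only the NovelContext signal and the empty suppression-map and routine state come from the text, and all method and field names are invented for illustration:

```python
from collections import defaultdict

class DeliberativeBrain:
    # Background consolidation sketch; state matches the first-hour
    # description above: nothing learned yet, everything empty.
    def __init__(self):
        self.suppression_maps = {}             # none exist for a new context
        self.compiled_routines = {}            # nothing has been repeated
        self.trajectories = defaultdict(list)  # per-context trajectory data

    def on_novel_context(self, key):
        # Upward channel: the reflexive brain reports an unseen context
        # key, and trajectory accumulation begins for it.
        _ = self.trajectories[key]

    def record(self, key, sample):
        self.trajectories[key].append(sample)
```

Consolidation over these trajectories would run offline, between interactions, not on the real-time tick.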
Behavioral phase space
Aaron has the joke loaded. He has the speakers. He makes an active choice not to use them — not as a rule, but because the mathematics says he hasn't earned it yet. The light on his side glows red.
Co-existence
CCF doesn't replace a robot's brain.
It decides when to use it.
Every robot already has a brain — vision, language, navigation, reasoning. CCF sits alongside all of it. It is not a personality system. It is not a memory system. It is a behavioural gate: a mathematical constraint on how much of those capabilities the robot is allowed to express, given how much it has earned.
Language models
GPT-4, Llama, Claude in the robot
The LLM generates the words. CCF determines how many. In Shy Observer, the system prompt constrains the model to measured, non-presumptuous responses. In Quietly Beloved, full personality expression is permitted. The model doesn't change — the gate does.
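One way to picture the gate, as a sketch: the phase selects the envelope around the model, not the model itself. The prompts and token budgets below are invented for illustration; only the two phase names and the idea of prompt-level constraint come from the text:

```python
PHASE_ENVELOPE = {
    # Invented prompts and budgets, keyed by the named phases.
    "Shy Observer": {
        "system": "Respond briefly and plainly. No jokes, no familiarity.",
        "max_tokens": 40,
    },
    "Quietly Beloved": {
        "system": "Full personality expression is permitted.",
        "max_tokens": 400,
    },
}

def gated_request(phase, user_message):
    # Same model, different gate: only the envelope changes.
    env = PHASE_ENVELOPE[phase]
    return {
        "messages": [
            {"role": "system", "content": env["system"]},
            {"role": "user", "content": user_message},
        ],
        "max_tokens": env["max_tokens"],
    }
```

In Shy Observer, the joke request reaches the model wrapped in a constraint that makes the measured response the natural one.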
Computer vision
Vision and recognition systems
CCF doesn't require identity recognition — it fingerprints situations, not people. But if a vision system does provide identity, it extends the context key. CCF scales with sensor capability without architectural modification. Better sensors, finer trust.
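The extension can be sketched as one function: identity, when available, is simply another key segment. The segment convention and the label below are assumptions, not a specified format:

```python
def extended_context_key(base_key, identity=None):
    # CCF fingerprints situations, not people. If a vision stack
    # happens to supply an identity, it becomes one more key segment:
    # no architectural change, just a finer fingerprint.
    return base_key if identity is None else f"{base_key}:{identity}"
```

Without identity the key is unchanged, so the same accumulator logic works on robots with and without recognition.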
Navigation and SLAM
Spatial awareness and movement
A robot that knows a room geometrically is different from one that has earned trust in it. SLAM maps walls and furniture. CCF maps comfort. The same physical space can have high spatial familiarity and zero social coherence — on day one, they always diverge.
Manufacturer AI
Proprietary personality and emotion layers
If a robot ships with its own emotional model or personality system, CCF is a constraint layer on top of it, not a replacement. The manufacturer's AI defines what the robot can do. CCF defines how much of that the robot is allowed to express in an unfamiliar context.
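The layering can be sketched as a clamp, under the assumption that both layers express levels in [0, 1]; the linear cap is illustrative, not a specified formula:

```python
def constrain_expression(requested, earned_coherence):
    # The manufacturer's AI proposes an expression level; CCF clamps
    # it to the trust earned in this context. The manufacturer defines
    # what is possible, CCF defines how much is permitted here.
    return min(requested, earned_coherence)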
Safety systems
Hard safety constraints
CCF and hard safety constraints operate on different layers. Safety systems prevent physical harm — they always fire. CCF modulates social expressiveness. There is no conflict: CCF's worst case is silence and stillness, which is never unsafe.
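The non-conflict is easy to see in a sketch, assuming a boolean safety interlock below CCF; the names and the multiplicative scaling are illustrative:

```python
def actuate(safety_ok, requested_amplitude, coherence):
    # Hard safety sits on its own layer and always fires; CCF only
    # scales social expressiveness. CCF's worst case is zero output,
    # silence and stillness, which is never unsafe.
    if not safety_ok:
        return 0.0
    return requested_amplitude * coherence
```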
Multi-robot
Other robots in the same space
Robots running CCF can share social endorsements — not full coherence fields, but a log of 'earned trust with this context cluster.' A new robot entering a space where others have operated for months inherits a warm start. Trust becomes a social resource at the group level.
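A warm start could look like the following sketch, where endorsements are a map from context cluster to earned trust; the discount factor is an invented parameter expressing that inherited trust is a warm start, not full trust:

```python
def warm_start(own, endorsements, discount=0.5):
    # Merge peers' endorsement logs into a fresh robot's accumulators,
    # keeping whichever value is higher per context cluster.
    merged = dict(own)
    for key, trust in endorsements.items():
        merged[key] = max(merged.get(key, 0.0), trust * discount)
    return merged
```

A new robot entering a long-occupied space starts above zero in those contexts, but still below what the resident robots have earned.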
How the layers sit
┌─────────────────────────────────────────────────────────┐
│ Robot capabilities │
│ Vision · Language model · Navigation · Audio · Motion │
└─────────────────────────┬───────────────────────────────┘
│ capabilities available
▼
┌─────────────────────────────────────────────────────────┐
│ CCF — Contextual Coherence Fields │
│ │
│ Reflexive brain Deliberative brain │
│ ───────────── ────────────────── │
│ Context key Suppression maps │
│ Accumulator lookup Habit compilation │
│ Minimum gate Mixing matrix │
│ Behavioral phase Context boundaries │
│ Permeability function Offline consolidation │
│ │
│ effective_coherence ──► behavioral envelope │
└─────────────────────────┬───────────────────────────────┘
│ gated, earned expression
▼
┌─────────────────────────────────────────────────────────┐
│ Behavioral output │
│ Motor · LED · Audio · Language │
└─────────────────────────────────────────────────────────┘