Any robot or agent can now have emergent social behaviour.
Not scripted. Not rule-based. Earned.
CCF is a patent-pending mathematical framework that gives robots and AI agents a provable sense of context — so trust can accumulate the way it does in nature.
The framework
Three primitives. One provable result.
Context-keyed accumulators
Trust that remembers where it was earned
Every interaction is tagged with a context key — who, where, what time of day. Coherence accumulates independently per context, so the robot that's calm in the kitchen doesn't carry that warmth into a stranger's office.
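A minimal sketch of the idea, not the crate's API: keep one coherence accumulator per context key, so updates in one context never touch another. The key fields, the EWMA update, and `ALPHA` are all illustrative assumptions (the actual crate is `no_std` with no heap; this sketch uses `std` collections for brevity).

```rust
use std::collections::HashMap;

/// Hypothetical context key: (person, place, time-of-day bucket).
#[derive(Clone, Copy, PartialEq, Eq, Hash, Debug)]
struct ContextKey {
    who: u8,
    place: u8,
    hour_bucket: u8,
}

/// One coherence accumulator per context key.
#[derive(Default)]
struct ContextAccumulators {
    coherence: HashMap<ContextKey, f32>,
}

impl ContextAccumulators {
    /// Fold an interaction score in [0, 1] into this context's
    /// accumulator as an exponential moving average. ALPHA is an
    /// illustrative smoothing constant, not a value from CCF.
    fn record(&mut self, key: ContextKey, score: f32) {
        const ALPHA: f32 = 0.2;
        let c = self.coherence.entry(key).or_insert(0.0);
        *c = (1.0 - ALPHA) * *c + ALPHA * score;
    }

    fn coherence_in(&self, key: ContextKey) -> f32 {
        self.coherence.get(&key).copied().unwrap_or(0.0)
    }
}

fn main() {
    let kitchen = ContextKey { who: 1, place: 1, hour_bucket: 2 };
    let strangers_office = ContextKey { who: 9, place: 4, hour_bucket: 2 };
    let mut acc = ContextAccumulators::default();
    for _ in 0..20 {
        acc.record(kitchen, 0.9); // twenty calm encounters at home
    }
    // Warmth earned in the kitchen stays in the kitchen:
    assert!(acc.coherence_in(kitchen) > 0.8);
    assert_eq!(acc.coherence_in(strangers_office), 0.0);
    println!(
        "kitchen = {:.2}, office = {:.2}",
        acc.coherence_in(kitchen),
        acc.coherence_in(strangers_office)
    );
}
```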
Minimum gate
Both must be true or I stay reserved
The gate requires both instantaneous coherence and long-run context coherence to cross a threshold simultaneously. This eliminates false positives: a single warm encounter doesn't unlock full engagement.
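The gate itself is one comparison. A sketch under assumed numbers (the threshold `THETA` is illustrative, not CCF's published value): requiring both signals to clear the threshold at once is the same as gating on their minimum.

```rust
/// Illustrative threshold; CCF's actual gate values aren't stated here.
const THETA: f32 = 0.7;

/// The minimum gate: engagement unlocks only while BOTH signals clear
/// the threshold simultaneously, i.e. min(instant, accumulated) >= THETA.
fn gate_open(instant: f32, accumulated: f32) -> bool {
    instant.min(accumulated) >= THETA
}

fn main() {
    // One warm encounter: instantaneous coherence spikes, but the
    // long-run context accumulator is still cold -> gate stays closed.
    assert!(!gate_open(0.95, 0.15));
    // Many consistent encounters: both signals high -> gate opens.
    assert!(gate_open(0.85, 0.78));
    // Stale trust: warm history, but the current reading is off.
    assert!(!gate_open(0.30, 0.90));
    println!("minimum gate behaves as expected");
}
```

Gating on the minimum is what makes the false-positive claim concrete: no single axis, however high, can open the gate alone.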
Min-cut boundary
This room feels different from that room
A graph-partitioning algorithm continuously separates contexts by their interaction signatures. Trust built in one environment naturally bounds its influence on another — mathematically, not by rule.
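To make the boundary idea concrete, here is a toy global min-cut over contexts, where edge weight stands in for interaction-signature similarity. The graph, the weights, and the brute-force enumeration are all illustrative (CCF's actual algorithm isn't specified here; a production system would use something like Stoer-Wagner rather than enumerating bipartitions).

```rust
/// Exact global min-cut of a small weighted graph by enumerating
/// bipartitions. Nodes are contexts; edge weights model similarity of
/// interaction signatures. Returns (cut weight, bitmask of the side
/// that excludes node 0).
fn min_cut(w: &[Vec<f32>]) -> (f32, u32) {
    let n = w.len();
    let mut best = (f32::INFINITY, 0u32);
    // Fix node 0 on side A; enumerate every assignment of the rest.
    for mask in 1..(1u32 << (n - 1)) {
        let side = mask << 1; // bit i set means node i is on side B
        let mut cut = 0.0;
        for i in 0..n {
            for j in (i + 1)..n {
                if ((side >> i) & 1) != ((side >> j) & 1) {
                    cut += w[i][j];
                }
            }
        }
        if cut < best.0 {
            best = (cut, side);
        }
    }
    best
}

fn main() {
    // Contexts 0,1 = home; contexts 2,3 = office. Strong in-cluster
    // similarity, weak cross-cluster similarity (illustrative numbers).
    let n = 4;
    let mut w = vec![vec![0.0f32; n]; n];
    let edges = [
        (0, 1, 0.9), (2, 3, 0.9),
        (0, 2, 0.05), (0, 3, 0.05), (1, 2, 0.05), (1, 3, 0.05),
    ];
    for &(i, j, v) in &edges {
        w[i][j] = v;
        w[j][i] = v;
    }
    let (cut, side) = min_cut(&w);
    // The cheapest cut separates home from office...
    assert_eq!(side, 0b1100);
    // ...and its small weight bounds cross-context influence.
    assert!((cut - 0.2).abs() < 1e-6);
    println!("cut = {:.2}, side = {:#06b}", cut, side);
}
```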
Existence proof
The $50 robot that earned trust
We ran CCF on an mBot2 — a $50 educational robot with a Bluetooth connection and six sensor readings. After real interactions in a home environment, the robot developed measurably different social phases per context: more cautious with strangers, more engaged with familiar family members.
No training data. No scripted states. The trust emerged from the math.
Run it yourself →

Social phase model
Every agent starts as a ShyObserver
Social phase is determined by two axes: instantaneous coherence and accumulated context coherence.
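A sketch of how two axes map to a phase. `ShyObserver` is the starting phase named on this page; the other phase names and the threshold are hypothetical placeholders, since the full phase table isn't given here.

```rust
/// ShyObserver comes from the page; the other variants are
/// illustrative stand-ins for the rest of the phase model.
#[derive(Debug, PartialEq)]
enum SocialPhase {
    ShyObserver,
    Acquaintance, // hypothetical intermediate phase
    Companion,    // hypothetical fully-engaged phase
}

/// Illustrative threshold shared by both axes.
const THETA: f32 = 0.7;

/// Phase from the two axes: instantaneous coherence (now) and
/// accumulated context coherence (history in this context).
fn phase(instant: f32, accumulated: f32) -> SocialPhase {
    match (instant >= THETA, accumulated >= THETA) {
        (false, false) => SocialPhase::ShyObserver,
        (true, true) => SocialPhase::Companion,
        _ => SocialPhase::Acquaintance,
    }
}

fn main() {
    // A new agent has no history on either axis: it starts shy.
    assert_eq!(phase(0.0, 0.0), SocialPhase::ShyObserver);
    // One great first meeting is not enough for full engagement.
    assert_eq!(phase(0.9, 0.1), SocialPhase::Acquaintance);
    // Sustained, currently-confirmed coherence fully engages.
    assert_eq!(phase(0.9, 0.85), SocialPhase::Companion);
    println!("every agent starts as a {:?}", SocialPhase::ShyObserver);
}
```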
Ready to give your robot a sense of self?
CCF is available as a Rust crate on crates.io. It is pure `no_std` with no heap allocation, so it runs on embedded hardware all the way down to $50 robots.