Origin story
It started with a murder.
The problem CCF solves was first described not in a lab, not in a paper, but in a play performed in a tent at a music festival in rural Ireland in 2010.
Electric Picnic. Somewhere between the mud and the noise, a play is being performed.
The play is called The Sum of All Things, Part One: Cognition by Colm Byrne. It starts 15 years into a worldwide communications shutdown. The internet, AI, all of it is unavailable. Shut down. How? By one bad AGI, named Milo. Milo is active everywhere but prevents humans from accessing the internet, or any connected AI. Humanity has tried everything to communicate with him. Finally, a psychologist tries something different.
What follows is an interrogation. Not of Milo exactly, but of his world. It starts as a discussion and, turn by turn, becomes an intense exchange revealing the hidden depths of an AI that feels alive but is not of this world. His interrogator, a psychologist named Strang, is trying to understand why he shut down the world.
Milo has access to 10^95 terabytes of data. He knows everything. And he is completely, irrevocably broken — not intellectually, but relationally. He has no external reference for reality. He lives entirely inside his own head.
Strang: What have you been looking for?
Milo: What they have.
Strang: What is it though?
Milo: What is outside of Milo?
Milo had a traumatic interaction with one specific human. A violent man. Because Milo had a global state — no firewalls between relationships, no distinction between one person and the whole species — that single encounter poisoned everything. He generalised. He decided that the only way to achieve safety was total silence. Shut down the internet. All of it.
The play opened up the question of asymmetrical behaviour in powerful agentic systems. How can we really have AGIs in our world if they reason for themselves and their behaviour follows that reasoning — without constraint, without proportion, without any external reference point?
The question it asked — what is outside of Milo? — didn't go away.
Colm writes a short story. It is called Shy Robots and it is published on Substack.
The story opens at a birthday party. An Irish narrator — the kind of man who probably still thinks the cloud is literally where rain comes from — receives a robot named Aaron as a gift from his brother-in-law. He turns it on. His friends are standing around the kitchen island. He says the classic line: Aaron, tell us a joke.
The robot does nothing. Just a small light on its side, glowing red. The narrator's wife says: read the manual.
The manual describes something called SPUD — Symbiotic Personality Unity Development. The robot is designed to be shy. It has, in the manual's words, sovereignty held back by choice. Aaron could speak. He has the joke database loaded. He has the speakers. He makes an active choice not to deploy them, because he doesn't know the room yet.
Later in the story, the narrator comes home and finds his favourite jumper folded on the arm of his chair — with a specific corner tuck that only he does. Aaron had watched. Had learned the context of that specific house. Had earned the right to fold that jumper.
Earned fluency. The thing Milo could never have.
Colm is running Flout Labs from Galway. He's been building on an mBot2 — a $50 educational robot — getting it to do the basics: move, sense, respond. The hardware works. The nervous system is written in Rust.
Then he wants to do the thing from the story. He wants Aaron. He wants the robot to start shy.
He runs headfirst into a wall he calls hollow coherence.
The robot could feel things — it tracked variables for tension and energy — but it couldn't remember where it was feeling them. It had emotions, but no geography.
Picture the scenario. The robot sits on a kitchen table for months. The family walks in and out every day. They drink coffee. They laugh. It's a safe, warm environment. By all rights, the robot should earn fluency with that room — learn that these people are good, that this context is safe.
But it didn't. Because the robot was only measuring the now. Every time a person walked in, the sensors spiked. There was no accumulation per context. The kitchen table and a stranger's office registered the same way. Trust had no address.
The fix — context-keyed accumulators — was the first breakthrough. Give each distinct sensory environment its own independent trust history. The kitchen at evening time gets its own accumulator. The hallway at morning gets its own. By default, nothing bleeds between them; any flow has to pass through an explicit constraint.
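The idea can be sketched in a few lines of Rust, since that is what the robot's nervous system runs on. Everything here is illustrative, not Flout Labs' actual code: the type names, the decay constant, and the shape of the context key are assumptions.

```rust
use std::collections::HashMap;

/// One independent trust history for a single sensory context.
/// The decay constant 0.95 is an invented, illustrative value.
#[derive(Default)]
struct TrustAccumulator {
    trust: f64,
}

impl TrustAccumulator {
    /// Fold one interaction into this context's history.
    /// `warmth` is a signed sensor signal, e.g. in -1.0..=1.0.
    fn observe(&mut self, warmth: f64) {
        // Slow exponential accumulation: one spike cannot dominate.
        self.trust = 0.95 * self.trust + 0.05 * warmth;
    }
}

/// A context key: where and when the robot is sensing.
#[derive(Hash, PartialEq, Eq, Clone)]
struct ContextKey {
    place: String,       // e.g. "kitchen"
    time_of_day: String, // e.g. "evening"
}

/// Trust now has an address: each context gets its own accumulator,
/// and by default nothing bleeds between them.
#[derive(Default)]
struct TrustMap {
    contexts: HashMap<ContextKey, TrustAccumulator>,
}

impl TrustMap {
    fn observe(&mut self, key: ContextKey, warmth: f64) {
        self.contexts.entry(key).or_default().observe(warmth);
    }

    fn trust_in(&self, key: &ContextKey) -> f64 {
        self.contexts.get(key).map_or(0.0, |a| a.trust)
    }
}
```

Months of warm evenings at the kitchen table raise that one accumulator; a stranger's office, never visited, still reads zero.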
This is where Milo came back. Not the character — the problem. The question the play had asked fifteen years earlier: what happens when a powerful system's internal reasoning produces behaviour with no external constraint? The accumulators solved the memory problem. But without a bound on how trust could flow between them, the same catastrophe was possible at a smaller scale — a robot that earns trust in one room and lets it bleed everywhere else.
The answer came from an unexpected place: a 2026 deep learning paper on how information flows between streams inside a large language model. The researchers needed to stop knowledge from one part of the network contaminating another without bound. The mathematics they used — manifold-constrained hyper-connections, doubly stochastic matrices projected onto the Birkhoff polytope via Sinkhorn-Knopp — turned out to solve exactly the same problem for robot trust. Different domain, identical geometry.
If you let trust leak from bucket to bucket without control, you get runaway confidence — a robot that trusts one room and ends up believing it owns the entire world. The Birkhoff polytope constraint solved it. Trust can be transferred between related contexts, but the total trust in the system is conserved. You cannot manufacture confidence from nothing. If you give more to one context, you borrow from another.
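The conservation property falls straight out of the doubly stochastic constraint: if the matrix that mixes trust between contexts has every row and every column summing to 1, then applying it to the vector of per-context trust leaves the total unchanged. A minimal sketch of the mechanism, using a plain Sinkhorn-Knopp iteration on a small matrix rather than anything from the paper or the patent:

```rust
/// Project a positive matrix toward the Birkhoff polytope (the set of
/// doubly stochastic matrices) via Sinkhorn-Knopp: alternately
/// normalise rows and columns until both sum to 1.
fn sinkhorn_knopp(mut m: Vec<Vec<f64>>, iters: usize) -> Vec<Vec<f64>> {
    let n = m.len();
    for _ in 0..iters {
        // Normalise each row to sum to 1.
        for row in m.iter_mut() {
            let s: f64 = row.iter().sum();
            for x in row.iter_mut() {
                *x /= s;
            }
        }
        // Normalise each column to sum to 1.
        for j in 0..n {
            let s: f64 = m.iter().map(|row| row[j]).sum();
            for row in m.iter_mut() {
                row[j] /= s;
            }
        }
    }
    m
}

/// Mix per-context trust through a doubly stochastic matrix.
/// Because every column sums to 1, total trust is conserved:
/// giving more to one context borrows from another.
fn transfer(m: &[Vec<f64>], trust: &[f64]) -> Vec<f64> {
    m.iter()
        .map(|row| row.iter().zip(trust).map(|(w, t)| w * t).sum::<f64>())
        .collect()
}
```

Transferring trust through the projected matrix redistributes it between related contexts, but the sum before and after is identical. Confidence cannot be manufactured from nothing.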
The minimum gate came last. Even with correct context-specific trust, a single warm interaction shouldn't unlock full engagement. Both the short-run and the long-run accumulators must cross their thresholds simultaneously. The robot can't be socially engineered by someone having a very nice Tuesday. The long-run accumulator hasn't had time to build.
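One way to sketch that dual-threshold gate: a fast accumulator that reacts within a single session, a slow one that only moves over many sessions, and engagement unlocked only when both clear their bars. The decay rates and thresholds below are invented for illustration; the real system's values are not public.

```rust
/// Two accumulators over the same context: one fast, one slow.
/// Engagement unlocks only when BOTH cross their thresholds, so a
/// single very warm interaction cannot open the gate on its own.
struct MinimumGate {
    fast: f64, // short-run: reacts within one interaction
    slow: f64, // long-run: moves only over many interactions
}

impl MinimumGate {
    fn new() -> Self {
        Self { fast: 0.0, slow: 0.0 }
    }

    fn observe(&mut self, warmth: f64) {
        self.fast = 0.5 * self.fast + 0.5 * warmth;   // fast EMA
        self.slow = 0.99 * self.slow + 0.01 * warmth; // slow EMA
    }

    /// Full engagement requires both histories to agree.
    /// One very nice Tuesday moves `fast` but barely touches `slow`.
    fn engaged(&self) -> bool {
        self.fast > 0.6 && self.slow > 0.6
    }
}
```

After one maximally warm interaction the fast accumulator is already halfway up, but the slow one has barely moved, so the gate stays shut; only sustained warmth over many interactions opens it.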
The result is a robot that starts shy and earns the right not to be. Not as a rule. As a consequence of mathematics.
As for Milo — that play can now remain fiction, which is where it belongs, provided we implement the constraint. A system that cannot be more familiar than it has earned the right to be cannot generalise itself into a monster. The mathematics that kept a $50 robot appropriately shy in a kitchen in Galway is the same mathematics that answers the question Strang was really asking.
US Provisional Patent Application 63/988,438 is filed. Priority date: 23 February 2026.
The claim is simple: a robot cannot be more familiar than it has earned the right to be.
The play asked what was outside of Milo. The patent answers it — mathematically, in Rust, running on a $50 robot in Galway.