Architectural Extensions
Four extensions to the core CCF architecture. Hierarchical mixing for embedded scale. Environmental identity with irreversibility proofs. Privacy-responsive trust that rises when privacy mode is invoked. Emergent home context without programmed coordinates.
What it is
The core CCF architecture (Prov 1) scales to about 64 contexts before the O(n²) Sinkhorn-Knopp projection becomes expensive on embedded hardware. Extension 1 introduces hierarchical block-diagonal mixing: contexts are clustered, each cluster gets its own small doubly stochastic matrix, and a separate inter-cluster matrix handles cross-cluster influence. Cost drops to O(k² + Σnᵢ²), enabling hundreds of contexts on ARM Cortex-M.
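The block-diagonal scheme can be sketched in a few lines. This is a minimal illustration, not the CCF implementation: each small intra-cluster matrix and the inter-cluster matrix are projected toward the Birkhoff polytope independently via plain Sinkhorn-Knopp row/column normalisation.

```python
def sinkhorn(M, iters=50):
    """Project a positive square matrix toward the Birkhoff polytope
    (doubly stochastic matrices) by alternately normalising rows and
    columns -- the Sinkhorn-Knopp iteration."""
    n = len(M)
    A = [row[:] for row in M]
    for _ in range(iters):
        for i in range(n):                       # row normalisation
            s = sum(A[i])
            A[i] = [x / s for x in A[i]]
        for j in range(n):                       # column normalisation
            s = sum(A[i][j] for i in range(n))
            for i in range(n):
                A[i][j] /= s
    return A

def hierarchical_mix(cluster_mats, inter_mat):
    """Project each intra-cluster matrix and the inter-cluster matrix
    independently. Each projection touches only its own block, so the
    cost is O(k^2 + sum of n_i^2) rather than O(n^2) for flat mixing."""
    return [sinkhorn(m) for m in cluster_mats], sinkhorn(inter_mat)
```

Because no projection ever sees the full n×n matrix, the per-tick work is bounded by the largest cluster rather than the total context count.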
Extension 2 proves that identity is not assigned—it emerges from environmental interaction and is mathematically irreversible. Three irreversibility theorems show that robots in different environments develop formally incompatible identities: their mixing matrices have different dimensions, their vocabularies diverge, and their accumulated coherence landscapes become non-mergeable.
Extension 4 resolves the privacy/personalisation tradeoff. When a user invokes privacy mode, a hardware relay disconnects sensors, a null-content trust event fires, and a rarity-scaled increment is applied—rare privacy requests produce larger trust increments. Privacy invocation literally increases trust. Extension 5 lets the robot find its charging station without GPS or beacons. Battery depletion couples to tension via a monotonic transfer function, driving preference for the highest-coherence context, which over time becomes the charging station.
Problems it solves
Scaling to hundreds of contexts on embedded hardware
Flat Sinkhorn-Knopp is O(n²). With 200 contexts that's 40,000 operations per tick. Hierarchical mixing with 10 clusters of 20 reduces this to 100 + 10×20² = 4,100 operations. A 10x reduction that fits within ARM Cortex-M timing budgets.
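The arithmetic above is easy to verify directly. A small sketch of the two cost models, using the figures from the text:

```python
def flat_cost(n):
    """Per-tick operation count of flat Sinkhorn-Knopp over n contexts."""
    return n * n

def hierarchical_cost(k, cluster_sizes):
    """Per-tick count for k-cluster hierarchical mixing: one k x k
    inter-cluster projection plus one small projection per cluster."""
    return k * k + sum(s * s for s in cluster_sizes)

# 200 contexts as 10 clusters of 20:
# flat_cost(200) = 40000, hierarchical_cost(10, [20]*10) = 4100
```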
Proving identity is unique and irreversible
Two robots deployed in different wards develop different sensor vocabularies, different mixing matrix dimensions, and different coherence landscapes. Theorem 1 proves their mixing matrices are dimensionally incompatible. You cannot transplant one robot's identity into another.
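The dimensional-incompatibility condition from Theorem 1 can be illustrated with a trivial check. The context counts (12 and 17) below are made up for illustration; the point is only that matrices of different dimensions admit no transplant.

```python
def dimensionally_compatible(mix_a, mix_b):
    """Necessary condition for transplanting one robot's identity into
    another: the mixing matrices must have equal dimensions. Robots in
    different environments grow different context counts, so this fails."""
    return len(mix_a) == len(mix_b)

robot_a = [[0.0] * 12 for _ in range(12)]   # 12 contexts learned in ward A
robot_b = [[0.0] * 17 for _ in range(17)]   # 17 contexts learned in ward B
```

Dimensional mismatch is only the first barrier: even dimensionally equal matrices would still face the vocabulary-divergence and landscape non-mergeability results.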
Resolving the privacy/personalisation tradeoff
Traditional systems face a binary choice: collect data for personalisation or respect privacy at the cost of quality. CCF's privacy mode increases trust because the rarity of privacy invocations is itself a positive signal. The robot gets better at serving you precisely because it can't see you.
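One way to realise a rarity-scaled increment is to scale by the negative log of the user's privacy-invocation frequency. The source specifies only that rarer invocations yield larger increments; the log form and the `base` constant here are assumptions for illustration.

```python
import math

def rarity_scaled_increment(privacy_invocations, total_interactions, base=0.1):
    """Trust increment for a null-content privacy event, scaled by how
    rare privacy requests are for this user. Negative-log scaling is an
    assumed form: a 1-in-1000 requester earns a larger increment than a
    1-in-10 requester, matching the 'rarity is a positive signal' idea."""
    frequency = max(privacy_invocations, 1) / total_interactions
    return base * -math.log(frequency)
```

Note the event carries zero content bytes: the increment depends only on invocation counts, never on what the sensors would have seen.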
Finding home without GPS or beacons
The robot doesn't know where its charger is. But it knows which context has the highest accumulated coherence. As battery depletes, tension rises, and the minimum gate favours the highest-trust context. The charger happens to be there because that's where the robot spends the most stable time. Home emerges from the mathematics.
Real-world scenarios
Privacy mode in a hospital ward
A patient asks the companion robot for privacy during a medical examination. The robot’s hardware relay physically disconnects its microphone and camera within one sampling period. A null-content trust event fires. Because this patient rarely requests privacy, the rarity-scaled increment is large—trust actually goes up. When privacy mode ends, the robot resumes with higher coherence than before. The patient gets better service precisely because the robot respects boundaries.
Robot finding its charger without a beacon
Battery drops to 20%. Tension rises via the monotonic transfer function. The minimum gate now strongly favours the highest-coherence context—which happens to be the alcove next to the charging station, because that’s where the robot spends quiet, undisturbed time every night. The robot navigates there through coherence gradients alone. No programmed coordinates, no beacons, no special dock discovery protocol. Home is a mathematical consequence of where the robot has been happiest.
Two robots, two wards, incompatible identities
Robot A operates in a paediatric ward with high noise, bright lights, and frequent movement. Robot B operates in a palliative care unit with low noise, dim lights, and slow routines. After six months, their sensor vocabularies have diverged: A has fine-grained noise classifications, B has fine-grained light classifications. Their mixing matrices are different dimensions. Theorem 1 proves you cannot merge them. Each robot is the product of its own environmental history—identity is earned, not assigned.
What the claims cover
Claims A–C: Hierarchical Mixing
Block-diagonal intra-cluster plus inter-cluster mixing. Each independently projected onto the Birkhoff polytope. Computational cost O(k² + Σnᵢ²) versus O(n²) for flat mixing.
Claims D–H: Environmental Identity Formation
Ontogenetic identity from environmental interaction. Three irreversibility theorems: mixing matrix dimensional incompatibility, vocabulary divergence, coherence landscape non-mergeability. Identity fingerprint for fleet monitoring.
Claims I–L: Privacy-Responsive Trust
Hardware sensor disconnection within one sampling period. Null-content trust event with zero content bytes. Rarity-scaled trust increment. Three technical effects: reduced bandwidth, increased resilience, hardware-verified disconnection.
Claims M–O, Z: Emergent Home Context
Emergent safe-haven formation without programmed coordinates. Battery state-of-charge coupled to tension. Monotonically increasing transfer function drives return-to-charge through coherence gradients.
Applications
Licensing enquiries
CCF is released under BSL-1.1 — free for evaluation and non-commercial use. Commercial licensing is available from Flout Labs.
cbyrne@floutlabs.com