Sensor Upgrade? No Problem. How CCF Carries Trust Through Hardware Changes
A robot has been operating in a residential care home for four months. It uses a basic sensor suite: ambient light, sound level, ultrasonic proximity, accelerometer, and time-of-day. Each unique combination of quantised sensor readings forms a context key in the CCF vocabulary. One particular context key gets activated every weekday morning around 8:15: bright ambient light, moderate sound, approaching presence, stationary robot, upright orientation, morning period. The robot has accumulated 120 interactions with this context, coherence of 0.47, and a floor of 0.355.
The context key describes "someone approaches me in the bright, moderately noisy morning." But the robot does not know who. Its sensors cannot distinguish Alice the day-shift nurse from Bob the visiting physiotherapist. Both trigger the same context key because both approach in the same lighting and sound conditions at the same time of day.
The care home installs a camera module. Now the robot can distinguish faces. Alice and Bob are no longer the same context. The morning approach splits into "Alice approaches in the bright, moderately noisy morning" and "Bob approaches in the bright, moderately noisy morning."
The old context key is obsolete. The new vocabulary has higher resolution. The question: what happens to the 120 interactions and the 0.47 coherence that the robot earned with the undifferentiated "someone approaches" context?
Option 1: Discard. The old context key no longer maps to anything in the new vocabulary. Zero it out. The robot starts fresh with Alice and Bob. Four months of earned trust evaporates because the sensor suite changed.
Option 2: Copy. Give both Alice and Bob the full 120 interactions and 0.47 coherence. But this doubles the evidence mass. The robot has 120 real interactions, not 240. Copying creates trust from nothing -- a violation of the conservation principle.
Option 3: Split. Divide the evidence proportionally based on who was actually present during those 120 interactions. This is what CCF does.
The split mechanism is described in [E6-0008] through [E6-0012], Claims AJ and AK of US Provisional 64/039,655.
The Split Formula
The split operates on evidence mass, the same fundamental quantity used by the merge operation (see the context merge post). A parent accumulator is divided into child accumulators, each receiving a proportion of the evidence.
n_i = p_i * n_parent
m_i = p_i * m_parent
c_i = m_i / n_i
f_i = min(0.7, n_i * 0.005)
Where:
- p_i is the proportion assigned to child i
- n_i is the child's interaction count
- m_i is the child's evidence mass
- c_i is the child's coherence (equals parent coherence, by algebra)
- f_i is the child's floor (derived from child's interaction count)
The constraint: proportions must sum to 1.
Sum of all p_i = 1
This guarantees conservation: the total evidence mass across all children equals the parent's evidence mass.
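A minimal sketch in Rust makes the conservation property concrete. The Accumulator type and its methods are illustrative, not the ccf-core API; they assume only the quantities defined above (interaction count, evidence mass, and the derived coherence and floor).

#[derive(Debug, Clone, Copy)]
struct Accumulator {
    n: f64, // interaction count
    m: f64, // evidence mass
}

impl Accumulator {
    fn coherence(&self) -> f64 {
        self.m / self.n
    }

    fn floor(&self) -> f64 {
        (self.n * 0.005).min(0.7)
    }

    // Split into children according to `proportions`, which must sum to 1.
    // Each child receives its share of interactions and evidence mass, so
    // total evidence mass is conserved exactly.
    fn split(&self, proportions: &[f64]) -> Vec<Accumulator> {
        let total: f64 = proportions.iter().sum();
        assert!((total - 1.0).abs() < 1e-9, "proportions must sum to 1");
        proportions
            .iter()
            .map(|&p| Accumulator { n: p * self.n, m: p * self.m })
            .collect()
    }
}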
The Care Home Worked Example
Parent context C: c = 0.47, n = 120 (the anonymous "someone approaches" context)
The robot's episode history is analysed. Of the 120 morning approaches, historical clustering (using timestamps, duration patterns, and the new camera data applied retroactively to stored episodes) determines that 65% were Alice and 35% were Bob.
Child 1 (Alice): p_1 = 0.65
n_1 = 0.65 * 120 = 78
m_1 = 0.65 * (120 * 0.47) = 0.65 * 56.40 = 36.66
c_1 = 36.66 / 78 = 0.470
f_1 = min(0.7, 78 * 0.005) = min(0.7, 0.39) = 0.39
Child 2 (Bob): p_2 = 0.35
n_2 = 0.35 * 120 = 42
m_2 = 0.35 * (120 * 0.47) = 0.35 * 56.40 = 19.74
c_2 = 19.74 / 42 = 0.470
f_2 = min(0.7, 42 * 0.005) = min(0.7, 0.21) = 0.21
Conservation check:
m_1 + m_2 = 36.66 + 19.74 = 56.40
m_parent = 120 * 0.47 = 56.40
Perfect conservation. No trust created. No trust destroyed.
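Using the sketch above, the care home numbers reproduce directly (values rounded as in the worked example):

let parent = Accumulator { n: 120.0, m: 120.0 * 0.47 }; // 56.40 evidence mass
let children = parent.split(&[0.65, 0.35]);
// children[0] (Alice): n = 78, m = 36.66, coherence = 0.470, floor = 0.39
// children[1] (Bob):   n = 42, m = 19.74, coherence = 0.470, floor = 0.21
let recovered: f64 = children.iter().map(|c| c.m).sum();
assert!((recovered - parent.m).abs() < 1e-9); // conservation holds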
Notice that both children inherit the parent's coherence value (0.470). This is a mathematical consequence of the formula: c_i = (p_i * m_parent) / (p_i * n_parent) = m_parent / n_parent = c_parent. The proportionality cancels. The children start with the same coherence as the parent.
But the floors differ. Alice, with 78 interactions, gets a floor of 0.39. Bob, with 42 interactions, gets a floor of 0.21. The floor reflects depth of experience, and Alice has been present more often. This is correct: the robot's baseline trust with Alice should be more resilient than its baseline trust with Bob, because Alice accounts for more of the historical interactions.
Five Eligibility Conditions for Split
Not every context can be split. A split requires evidence that the parent context genuinely contains distinguishable sub-populations. Claim AJ defines five eligibility conditions:
1. Episode clustering shows an inter-cluster distance exceeding 1.5 standard deviations. The historical episodes associated with the parent context must cluster into distinct groups when analysed with the new sensor dimension. If Alice's and Bob's episodes overlap substantially (they approach at exactly the same time, with exactly the same pattern), the cluster distance is too small and the split is not warranted. The new sensor resolves a distinction, but if the distinction does not matter behaviourally, the split is deferred.
2. Behavioural outcome variance exceeds 2x the mean. The parent context's behavioural outcomes must show higher variance than expected. If every interaction with the anonymous "someone approaches" context went the same way regardless of whether it was Alice or Bob, there is no behavioural reason to split. High variance suggests that the parent context was averaging over genuinely different interactions.
3. Compiled routine conflicts exceed 15%. If the robot has compiled a routine for the parent context and that routine fails or produces unexpected results more than 15% of the time, the parent context may be conflating distinct situations. The routine was built assuming one population; it is encountering two.
4. Sponsor bridge failures exceed 50%. If the robot expected a sponsor bridge (from the approaching person) and the bridge failed to materialise more than half the time, it may be because the robot is treating two different people as one. One person is a confirmed sponsor; the other is not. The parent context cannot distinguish between them.
5. New sensor dimension partitions episodes at p less than 0.05. A statistical test (chi-squared or equivalent) on the episode distribution, conditioned on the new sensor dimension, must reject the null hypothesis that the episodes are drawn from a single population. This is the formal test that the new sensor provides meaningful discrimination.
At least one of these conditions must be met for the split to proceed. The conditions are evaluated automatically when a sensor vocabulary change is detected.
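A hedged sketch of that gate, using the five thresholds listed above; the field names and the surrounding type are illustrative, not taken from the specification.

struct SplitSignals {
    inter_cluster_distance_sd: f64, // cluster separation, in standard deviations
    outcome_variance_ratio: f64,    // behavioural outcome variance / mean
    routine_conflict_rate: f64,     // fraction of compiled-routine failures
    bridge_failure_rate: f64,       // fraction of expected sponsor bridges that failed
    partition_p_value: f64,         // chi-squared (or equivalent) test on the new dimension
}

impl SplitSignals {
    // The split proceeds if at least one eligibility condition is met.
    fn split_is_eligible(&self) -> bool {
        self.inter_cluster_distance_sd > 1.5
            || self.outcome_variance_ratio > 2.0
            || self.routine_conflict_rate > 0.15
            || self.bridge_failure_rate > 0.50
            || self.partition_p_value < 0.05
    }
}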
The Mixing Matrix After Split
Like the merge, the split requires updating the mixing matrix. The parent context occupied one row and one column. The children occupy two (or more) rows and columns. The matrix dimension increases.
The parent's row is split proportionally:
row_child_i[j] = p_i * row_parent[j] (for j != parent)
row_child_i[child_i] = p_i * row_parent[parent]
The resulting matrix is reprojected through Sinkhorn-Knopp to restore the doubly stochastic property. The Sinkhorn-Knopp for trust post and the convergence bound analysis describe the reprojection and its guarantees on embedded hardware.
The ccf-core crate on crates.io provides the SinkhornKnopp structure with project_flat() for runtime-variable dimension projection, handling exactly this case.
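A sketch of the matrix update, under two assumptions not stated above: the parent's column is split with the same proportions as its row, and the reprojection is a basic Sinkhorn-Knopp loop standing in for the crate's project_flat().

fn split_mixing_matrix(m: &[Vec<f64>], parent: usize, props: &[f64]) -> Vec<Vec<f64>> {
    let k = m.len();
    let c = props.len();
    let new_k = k - 1 + c;
    // surviving contexts keep their relative order; children take the last c slots
    let idx = |old: usize| if old < parent { old } else { old - 1 };
    let mut out = vec![vec![0.0; new_k]; new_k];
    for i in (0..k).filter(|&i| i != parent) {
        for j in (0..k).filter(|&j| j != parent) {
            out[idx(i)][idx(j)] = m[i][j];
        }
    }
    for (a, &pa) in props.iter().enumerate() {
        let ca = k - 1 + a;
        for j in (0..k).filter(|&j| j != parent) {
            out[ca][idx(j)] = pa * m[parent][j]; // row rule from the post
            out[idx(j)][ca] = pa * m[j][parent]; // assumed symmetric column rule
        }
        out[ca][ca] = pa * m[parent][parent]; // child keeps its share of the parent's self-transition
    }
    out
}

// Minimal Sinkhorn-Knopp reprojection: alternate row and column normalisation
// until the matrix is approximately doubly stochastic again.
fn reproject(m: &mut Vec<Vec<f64>>, iterations: usize) {
    let k = m.len();
    for _ in 0..iterations {
        for row in m.iter_mut() {
            let s: f64 = row.iter().sum();
            if s > 0.0 {
                row.iter_mut().for_each(|v| *v /= s);
            }
        }
        for j in 0..k {
            let s: f64 = (0..k).map(|i| m[i][j]).sum();
            if s > 0.0 {
                (0..k).for_each(|i| m[i][j] /= s);
            }
        }
    }
}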
Lineage Records and Reversibility
Every split creates a lineage record, just as every merge does. The record stores:
- The parent accumulator's state at the time of split
- The proportions assigned to each child
- The new sensor dimension that triggered the split
- A timestamp
If the camera module is removed (hardware failure, policy change, cost reduction), the children can be merged back into the parent using the lineage record. The merge formula from [E6-0004] applies in reverse, and the result equals the original parent (within floating-point precision) plus any interactions that occurred after the split.
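A sketch of the lineage record and its rollback; the merge here is the simple evidence-mass sum, standing in for the full merge formula of [E6-0004], and the type names are illustrative.

struct SplitLineage {
    parent_at_split: Accumulator, // parent accumulator state when the split ran
    proportions: Vec<f64>,        // share assigned to each child
    trigger_dimension: String,    // the new sensor dimension that triggered the split
    timestamp_ms: u64,
}

impl SplitLineage {
    // Merging the children's current state restores the original parent
    // (within floating-point precision) plus whatever evidence the children
    // accumulated after the split.
    fn rollback(&self, children: &[Accumulator]) -> Accumulator {
        Accumulator {
            n: children.iter().map(|c| c.n).sum(),
            m: children.iter().map(|c| c.m).sum(),
        }
    }
}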
School Scenario: Microphone Upgrade
A classroom robot has been operating with a basic sound-level sensor for a semester. It knows "loud classroom" (recess energy), "quiet classroom" (reading time), and "moderate classroom" (group work). It has accumulated hundreds of interactions in each context.
The school upgrades the robot with a directional microphone array that can distinguish individual voices. "Loud classroom" splits into "loud classroom with teacher leading" and "loud classroom with student-led activity." "Quiet classroom" splits into "quiet classroom, reading" and "quiet classroom, test in progress."
Each split follows the formula. Each child inherits the parent's coherence. Floors are assigned proportionally. Evidence mass is conserved. The robot does not lose a semester of earned trust because it got better ears.
Over the following weeks, the children diverge. "Loud classroom with teacher leading" accumulates faster (the teacher is consistently present). "Loud classroom with student-led activity" has more variance. The robot's compiled routines differentiate: it has a reliable routine for teacher-led sessions but defers to deliberative processing during student-led activities. The observable hesitation post describes how this deliberative processing manifests as visible caution.
Hospital Scenario: Upgraded Proximity Sensor
A hospital robot's ultrasonic proximity sensor is replaced with a LIDAR unit that provides richer spatial data. The old sensor could detect "person approaching from front." The new sensor distinguishes "person approaching from front, walking normally," "person approaching from front, using wheelchair," and "person approaching from front, using walker."
The "front approach" context splits three ways. The robot's existing trust is distributed across the three new contexts based on episode clustering. If historical episodes show that 60% of front approaches were walking, 30% used wheelchairs, and 10% used walkers, the proportions are 0.60, 0.30, and 0.10 respectively.
The walker users get the smallest share: 10% of the interactions, the lowest floor. The robot will be most cautious with walker users -- not because walkers are unusual, but because the robot has the least specific experience with them. This caution will resolve naturally over the following weeks as the robot accumulates direct experience with each sub-population.
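The split sketch from earlier handles this three-way case unchanged. The parent's interaction count and coherence below are illustrative placeholders, since only the proportions are given in the scenario:

let front_approach = Accumulator { n: 300.0, m: 300.0 * 0.52 }; // illustrative parent state
let children = front_approach.split(&[0.60, 0.30, 0.10]);
// walking:    n = 180, floor = min(0.7, 0.90) = 0.70
// wheelchair: n =  90, floor = min(0.7, 0.45) = 0.45
// walker:     n =  30, floor = min(0.7, 0.15) = 0.15 (smallest share, most caution)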
This is the same principle behind the privacy paradox: more specific knowledge about the environment leads to more appropriate behaviour, and the knowledge is earned through interaction, not assumed from demographics or categories.
What the Robot Does Not Do
The robot does not pretend the old vocabulary never existed. It does not claim to have always known the difference between Alice and Bob. The self-model (see the read-only self-awareness post) reports the split openly:
"I recently received a sensor upgrade. I used to see morning approaches as one situation. Now I can distinguish between Alice and Bob. My trust with Alice is based on about 78 historical interactions. My trust with Bob is based on about 42. I am treating these as separate relationships now, but my experience with each is less specific than my experience was with the combined context."
The u_t component (uncertainty) rises slightly after a split because the robot's context map has been restructured. The uncertainty disclosure post describes how elevated uncertainty drives cautious behaviour and honest communication.
The Continuity Guarantee
The split and merge mechanisms together provide a continuity guarantee: the robot's trust state survives hardware changes. Sensors can be added, removed, upgraded, or downgraded. Physical spaces can be reconfigured. The trust state adapts through evidence-preserving transformations that maintain three invariants:
- Evidence mass conservation. Total evidence mass before equals total evidence mass after.
- Floor proportionality. Floors are derived from interaction counts, which are split/merged proportionally.
- Lineage traceability. Every transformation is recorded and reversible.
These invariants hold for arbitrary chains of splits and merges. A robot that undergoes five sensor upgrades over three years maintains a traceable lineage from its original deployment to its current state. No trust is created from nothing. No trust is destroyed by hardware changes. The robot's earned history survives.
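As a hedged sketch, the first two invariants can be expressed as checks over any before/after pair of accumulator sets; the third is the SplitLineage record shown earlier.

fn check_invariants(before: &[Accumulator], after: &[Accumulator]) {
    let mass = |xs: &[Accumulator]| xs.iter().map(|a| a.m).sum::<f64>();
    let count = |xs: &[Accumulator]| xs.iter().map(|a| a.n).sum::<f64>();
    // Evidence mass conservation: total mass is unchanged by a split or merge.
    assert!((mass(before) - mass(after)).abs() < 1e-9);
    // Floor proportionality: floors are derived from interaction counts,
    // and the total interaction count is itself conserved.
    assert!((count(before) - count(after)).abs() < 1e-9);
}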
The irreversible identity proofs establish that this continuity is robot-specific: one robot's history cannot be transferred to another. The split and merge mechanisms operate within a single robot's trust field. They change the resolution of the field -- how many contexts it distinguishes, how finely it discriminates -- but they do not change whose field it is.
The emergent safe haven post provides a concrete example of why continuity matters: the robot's sense of home survives a sensor upgrade because the charging-station context is split or merged along with everything else, preserving the accumulated trust that produces homing behaviour.
FAQ
What if the episode clustering is ambiguous -- say, 52% Alice and 48% Bob?
The split still proceeds if at least one eligibility condition is met. Close proportions simply mean the children start with similar interaction counts and similar floors. The 52/48 split produces children that are nearly equal. Over subsequent interactions, one may pull ahead as the robot accumulates more experience with that specific person. The proportions do not need to be dramatically uneven for the split to be valid.
Can a single parent context split into more than two children?
Yes. The formula generalises to any number of children as long as the proportions sum to 1. A parent that contained interactions with five distinguishable people splits five ways. Each child gets its proportional share of evidence mass. Conservation holds exactly.
What happens during the transition period between old and new vocabulary?
The robot operates in a brief dual-vocabulary mode during the migration. The old context keys continue to be used for real-time gating while the split is computed. Once the split is complete and the mixing matrix is reprojected, the robot switches to the new vocabulary. The transition typically takes one consolidation cycle -- a few minutes at most. During this period, the robot uses the more conservative of the old and new trust estimates.
Does the split work for sensor downgrades -- removing a sensor?
Yes. A sensor downgrade merges contexts that the removed sensor used to distinguish. "Alice approaches" and "Bob approaches" merge back into "someone approaches" using the merge formula from [E6-0004]. The evidence mass from both is combined. This is exactly the lineage-record rollback described above, but triggered by hardware change rather than manual request.
How does this relate to the general problem of transfer learning in AI?
Transfer learning in neural networks moves learned representations across tasks or domains. CCF's split/merge moves earned trust across vocabulary resolutions. The key difference: transfer learning approximates -- the transferred representations are adapted heuristically. CCF's split/merge is exact -- evidence mass is conserved mathematically, floors are derived from counts, and lineage records enable perfect reconstruction. There is no approximation, no heuristic, and no risk of negative transfer.
Patent pending. US Provisional 64/039,655.
-- Colm Byrne, Founder -- Flout Labs, Galway, Ireland