When Two Rooms Become One: How a Robot Merges Context Without Losing Trust
There is an eldercare facility in the west of Ireland. Two residents live in adjacent rooms -- Room 3A and Room 3B -- separated by a partition wall. The facility's robot has been operating for five months. It knows Room 3A well: 89 interactions, coherence 0.62, deep Phase II, compiled routines for morning medication reminders and evening check-ins. Room 3B is slightly less familiar: 71 interactions, coherence 0.54, early Phase II, routines for meal-time assistance and mobility support.
The facility renovates. The wall comes down. Rooms 3A and 3B become a single shared space.
The robot now has a problem that no prior autonomous system architecture handles correctly: it has two separate trust histories for a space that is now one room. What does it do?
Option 1: Reset both accumulators. Start fresh. The robot forgets five months of earned trust in both rooms and begins accumulating from zero in the new combined space. This is safe but devastating. Five months of carefully earned trust -- the kind of trust that lets the robot remind Mrs. Murphy to take her medication without her flinching -- is gone. It takes weeks to rebuild. During those weeks, the robot is functionally useless in the space it knows best.
Option 2: Keep both accumulators. The robot sees a unified room but remembers two separate trust histories. When it is in the former 3A area, it uses the 3A accumulator. When it drifts toward the former 3B area, it switches to the 3B accumulator. This creates a strange discontinuity: the robot is confident on one side of an invisible line and cautious on the other. There is no wall, but the robot behaves as if there is.
Option 3: Merge. Combine the two accumulators into one, preserving the evidence from both. No trust lost. No discontinuity. One accumulator for one room, carrying the full weight of five months of interaction.
CCF implements Option 3. The evidence-preserving merge is described in [E6-0004] through [E6-0007] and [E6-0013], and formalised in Claim AH of US Provisional 64/039,655.
The Merge Formula
The merge operates on the fundamental quantity in CCF: evidence mass. Evidence mass is the product of interaction count and coherence value. It represents the total accumulated trust, measured in units of trusted interactions.
For two accumulators A and B:
n_M = n_A + n_B
m_A = n_A * c_A
m_B = n_B * c_B
c_M = (m_A + m_B) / n_M
f_M = min(0.7, n_M * 0.005)
Where:
- n is the interaction count
- c is the coherence value
- m is the evidence mass (n * c)
- f is the floor (permanent minimum trust)
The merged accumulator has the combined interaction count, the weighted-average coherence, and a floor derived from the combined experience.
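The formula can be sketched in a few lines. This is an illustrative reimplementation under the definitions above, not the ccf-core API; the `Accumulator` and `merge` names are hypothetical.

```python
# Sketch of the evidence-preserving merge. Illustrative only;
# not the ccf-core API.
from dataclasses import dataclass

FLOOR_CAP = 0.7     # universal cap on the permanent trust floor
FLOOR_RATE = 0.005  # floor earned per interaction

@dataclass
class Accumulator:
    n: int     # interaction count
    c: float   # coherence value

    @property
    def mass(self) -> float:
        # evidence mass: n * c, in units of trusted interactions
        return self.n * self.c

    @property
    def floor(self) -> float:
        return min(FLOOR_CAP, self.n * FLOOR_RATE)

def merge(a: Accumulator, b: Accumulator) -> Accumulator:
    """Combine two accumulators, conserving evidence mass."""
    n_m = a.n + b.n
    c_m = (a.mass + b.mass) / n_m  # weighted-average coherence
    return Accumulator(n=n_m, c=c_m)

# The two rooms from the facility described above:
room_3a = Accumulator(n=89, c=0.62)
room_3b = Accumulator(n=71, c=0.54)
merged = merge(room_3a, room_3b)
print(merged.n, round(merged.c, 4), merged.floor)  # 160 0.5845 0.7
```

Note that conservation falls out of the construction: the merged mass `n_m * c_m` is by definition the sum of the component masses.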
The Eldercare Worked Example
Room 3A: c_A = 0.62, n_A = 89
Room 3B: c_B = 0.54, n_B = 71
Step 1: Combined interaction count.
n_M = 89 + 71 = 160
Step 2: Evidence mass from each room.
m_A = 89 * 0.62 = 55.18
m_B = 71 * 0.54 = 38.34
Step 3: Merged coherence.
c_M = (55.18 + 38.34) / 160 = 93.52 / 160 = 0.585
Step 4: Merged floor.
f_M = min(0.7, 160 * 0.005) = min(0.7, 0.80) = 0.70
Step 5: Conservation check.
m_M = n_M * c_M = 160 * 0.585 = 93.60
m_A + m_B = 55.18 + 38.34 = 93.52
The difference (93.60 vs 93.52) comes from rounding c_M to three decimal places for display; computed at full precision, the two masses agree exactly. No trust was created. No trust was destroyed. The merged accumulator carries exactly the weight of both rooms' histories.
The result: the robot enters the newly combined room with a coherence of 0.585. Not as high as Room 3A alone (0.62), not as low as Room 3B alone (0.54), but a fair weighted average that reflects the robot's total experience. The floor of 0.70 is actually higher than either room had individually -- because the combined interaction count (160) earns a deeper floor than either room's count alone. This means the merged trust is more resilient to disruption than either component was. Five months of experience in two rooms produces a foundation that is genuinely stronger than either room alone.
The robot does not miss a beat. The wall comes down on Tuesday. On Wednesday, the robot moves through the combined space with a single, coherent trust state. Mrs. Murphy gets her medication reminder on time.
Why Evidence Mass Is the Right Quantity
You might ask: why not just average the coherence values? Average of 0.62 and 0.54 is 0.58, which is close to the weighted result of 0.585. Why the complexity?
Because simple averaging ignores interaction count. Consider a different scenario: Room 3A has c = 0.62 with n = 89 interactions. Room 3B has c = 0.90 with n = 3 interactions. Simple average: 0.76. Weighted by evidence mass: (55.18 + 2.70) / 92 = 0.629.
The simple average over-credits Room 3B's three lucky interactions. The evidence-mass weighting correctly discounts them. Three interactions is not a basis for 0.90 coherence -- it is barely enough to register. The weighted merge reflects that.
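The contrast is easy to check numerically. A minimal sketch in plain Python arithmetic (not library code):

```python
# Simple averaging vs evidence-mass weighting for the sparse-data
# scenario above. Plain arithmetic sketch, not the ccf-core API.
n_a, c_a = 89, 0.62  # well-established room
n_b, c_b = 3, 0.90   # three lucky interactions

simple = (c_a + c_b) / 2
weighted = (n_a * c_a + n_b * c_b) / (n_a + n_b)

print(round(simple, 2))    # 0.76
print(round(weighted, 3))  # 0.629
```

The weighted result stays close to the well-established room's 0.62 because the three sparse interactions carry almost no evidence mass.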
This is the same principle behind the asymmetric accumulation gate in CCF-001: unfamiliar contexts use min(), familiar contexts use weighted average. The merge formula extends this principle to combining accumulators. For background on the accumulation dynamics that produce these values, see the mathematics behind the shy robot and the trust farming impossibility result.
The Floor Bonus
The floor formula f_M = min(0.7, n_M * 0.005) produces a notable effect: the merged floor can exceed both component floors.
Room 3A alone: f_A = min(0.7, 89 * 0.005) = min(0.7, 0.445) = 0.445
Room 3B alone: f_B = min(0.7, 71 * 0.005) = min(0.7, 0.355) = 0.355
Merged: f_M = min(0.7, 160 * 0.005) = min(0.7, 0.80) = 0.70
The merged floor hits the cap at 0.70. Neither individual room was close to the cap on its own; the combined count pushes the raw value (0.80) past it, so the merged floor saturates at 0.70.
This is not trust creation -- the floor is bounded by the cap, and the cap is a universal constant. The effect is that combining two moderate histories produces a floor that neither could achieve alone. This is architecturally correct: 160 interactions across two related contexts is genuinely more evidence than 89 in one or 71 in another. The robot has earned a deeper foundation through broader experience.
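The floor arithmetic above, as a quick check. The `floor` helper is illustrative, not a ccf-core function:

```python
# Floor values for the individual rooms and the merged accumulator,
# using the floor formula from the text. Illustrative sketch.
def floor(n: int, cap: float = 0.7, rate: float = 0.005) -> float:
    return min(cap, n * rate)

f_a = floor(89)        # 0.445
f_b = floor(71)        # 0.355
f_merged = floor(160)  # 0.7: the raw value 0.80 saturates at the cap

# The merged floor exceeds both component floors while remaining
# bounded by the universal cap.
assert f_merged > max(f_a, f_b)
assert f_merged <= 0.7
```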
Lineage Records and Rollback
Every merge creates a lineage record: a stored snapshot of the pre-merge accumulators with timestamps, context keys, and the merge rationale. The lineage record serves two purposes.
Audit trail. Fleet operators can inspect the merge history to understand why a robot's trust profile looks the way it does. "This accumulator was formed by merging Room 3A (89 interactions, c=0.62) and Room 3B (71 interactions, c=0.54) on 2026-06-17." The record is the explanation.
Rollback. If the renovation is reversed -- the wall goes back up -- the merge can be undone. The lineage record contains the original accumulators. The rollback restores them exactly, including their pre-merge floors. The robot seamlessly returns to operating with two separate trust histories.
Rollback is important because physical environments change. A temporary renovation becomes permanent, or it does not. A school reconfigures classrooms for a semester, then reverts. A hospital opens a wall between wards during a surge, then closes it when capacity normalises. The robot's trust state should follow the physical reality, and lineage records make that possible.
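A lineage record supporting audit and rollback can be sketched as a pair of frozen snapshots. The `Snapshot` and `LineageRecord` names are hypothetical, assumed for illustration; this is not the ccf-core data model.

```python
# Minimal lineage-record sketch: audit trail plus exact rollback.
# Hypothetical structure, not the ccf-core API.
from dataclasses import dataclass

@dataclass(frozen=True)
class Snapshot:
    context_key: str
    n: int       # interaction count at merge time
    c: float     # coherence at merge time

@dataclass(frozen=True)
class LineageRecord:
    timestamp: str
    rationale: str
    pre_merge: tuple  # exact pre-merge accumulator snapshots

    def rollback(self) -> tuple:
        # Rollback is just returning the stored snapshots: the merge
        # is reversible because nothing about the originals is lost.
        return self.pre_merge

record = LineageRecord(
    timestamp="2026-06-17",
    rationale="partition removed between 3A and 3B",
    pre_merge=(Snapshot("room-3a", 89, 0.62),
               Snapshot("room-3b", 71, 0.54)),
)
a, b = record.rollback()
assert a.c == 0.62 and b.c == 0.54  # pre-merge state restored exactly
```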
Reconciliation with the Irreversibility Theorems
If you have read the irreversible robot identity proofs, you might see an apparent contradiction. Those proofs demonstrate that you cannot merge the trust states of two different robots. This post claims you can merge the trust states of two contexts within a single robot. Which is it?
The reconciliation is in [E6-0013]:
Intra-robot merge (this post): A single robot, single sensor vocabulary, reducing dimension from n contexts to n-1. The contexts share the same vocabulary, the same mixing matrix, the same trajectory space. The merge combines two rows/columns into one while preserving evidence mass and maintaining the doubly stochastic property via Birkhoff reprojection of the reduced matrix. The Sinkhorn-Knopp for trust post describes the reprojection process.
Inter-robot merge (the irreversibility theorems): Two different robots with different sensor vocabularies, different mixing matrices, different trajectory histories, different dimensions. Theorem 1 proves that doubly stochastic matrices of different dimensions cannot be merged while preserving both sets of transfer relationships. Theorem 2 proves that shared context keys have incommensurable trajectories. Theorem 3 proves that context groups formed by different environments are not isomorphic.
The distinction is dimensional: intra-robot merge reduces dimension within a single consistent space. Inter-robot merge attempts to combine inconsistent spaces. The first is well-defined. The second is provably impossible.
Hospital Scenario: Ward Reconfiguration
A hospital reconfigures its layout during a capacity surge. The partition between the observation bay and the recovery room is opened to create a larger treatment space. The ward robot has separate trust histories for each area:
Observation bay: c = 0.71, n = 234 (eight months of operation)
Recovery room: c = 0.55, n = 112 (the robot visits less frequently)
Merge:
n_M = 234 + 112 = 346
m_obs = 234 * 0.71 = 166.14
m_rec = 112 * 0.55 = 61.60
c_M = (166.14 + 61.60) / 346 = 227.74 / 346 = 0.658
f_M = min(0.7, 346 * 0.005) = min(0.7, 1.73) = 0.70
The merged coherence of 0.658 reflects that the robot knows the observation bay better than the recovery room. The floor hits the cap at 0.70, reflecting eight months of combined experience. The robot can continue to operate in the expanded space without the multi-week re-familiarisation period that a reset would require.
When the surge ends and the partition goes back up, the lineage record restores the original accumulators. The observation bay returns to its earned 0.71. The recovery room returns to 0.55. No trust lost in either direction.
The Mixing Matrix After Merge
Merging accumulators is not sufficient. The mixing matrix -- the doubly stochastic matrix that governs trust transfer between contexts -- must also be updated.
Before the merge, the mixing matrix has rows and columns for both the 3A context and the 3B context. After the merge, there is one combined context. The matrix dimension reduces by one.
The merged row is constructed by combining the original rows weighted by evidence mass:
row_M[j] = (m_A * row_A[j] + m_B * row_B[j]) / (m_A + m_B)
The merged column is constructed analogously. The resulting matrix may not be perfectly doubly stochastic (rows and columns may not sum to exactly 1.0 after the weighted combination), so the Sinkhorn-Knopp projection is applied to restore the doubly stochastic property. This is the same projection used during normal operation (Claims 19-23), applied to the reduced matrix.
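The row/column combination and reprojection can be sketched with NumPy. This is an illustrative implementation of the procedure described above, assuming a basic alternating-normalisation Sinkhorn-Knopp loop; it is not the ccf-core `SinkhornKnopp` structure.

```python
# Reduce a doubly stochastic mixing matrix by merging two contexts,
# weighting by evidence mass, then reproject with Sinkhorn-Knopp.
# Illustrative sketch, not the ccf-core implementation.
import numpy as np

def sinkhorn_knopp(M, iters=500):
    # Alternately normalise rows and columns until (approximately)
    # doubly stochastic. Converges for strictly positive matrices.
    for _ in range(iters):
        M = M / M.sum(axis=1, keepdims=True)
        M = M / M.sum(axis=0, keepdims=True)
    return M

def merge_contexts(M, i, j, m_i, m_j):
    """Merge contexts i and j (i < j), weighted by evidence mass."""
    assert i < j, "merge into the lower index"
    w_i = m_i / (m_i + m_j)
    w_j = m_j / (m_i + m_j)
    M = M.copy()
    M[i] = w_i * M[i] + w_j * M[j]           # merged row
    M = np.delete(M, j, axis=0)
    M[:, i] = w_i * M[:, i] + w_j * M[:, j]  # merged column
    M = np.delete(M, j, axis=1)
    return sinkhorn_knopp(M)                 # restore double stochasticity

# Uniform 3x3 mixing matrix; merge contexts 1 and 2 using the
# eldercare evidence masses.
M = np.full((3, 3), 1 / 3)
reduced = merge_contexts(M, 1, 2, m_i=55.18, m_j=38.34)
print(reduced.shape)  # (2, 2)
```

After reduction, every row and column of the smaller matrix sums to 1.0 again, so downstream trust-transfer logic sees a valid doubly stochastic matrix of the new dimension.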
The ccf-core crate on crates.io provides the SinkhornKnopp structure with project_flat() for runtime-n projection, which handles exactly this case: a mixing matrix whose dimension changed at runtime and needs reprojection.
The Cognitive Analogy
Humans merge contexts naturally. You move house. The old living room and the new living room are different rooms, but over time they merge in your mind into "the living room" -- a combined concept that carries trust from both. You do not start from zero in the new house. You carry your sense of home with you, weighted by how much time you spent in each place.
CCF's merge formula is the mathematical formalisation of this process. The robot does not carry vague impressions. It carries exact evidence masses, exact interaction counts, and exact coherence values. The merge is reproducible, auditable, and reversible.
The emergent safe haven post describes how a single context accumulates enough trust to become "home." The merge formula describes what happens when the physical boundaries of that home change. Together, they provide a complete account of how a robot's sense of place adapts to a changing physical world.
For how the robot handles the opposite problem -- a single context splitting into multiple new contexts when sensors are upgraded -- see the sensor vocabulary migration post.
FAQ
What happens if the two rooms have very different coherence values?
The merge produces a weighted average. If Room A has c = 0.90 with n = 200 and Room B has c = 0.10 with n = 5, the merged coherence is (180 + 0.5) / 205 = 0.880. Room B's low coherence barely affects the result because it has so few interactions. The evidence-mass weighting ensures that well-established trust is not diluted by sparse, low-quality data.
Can a merge be forced by a fleet operator, or does it require environmental change?
The merge is triggered by vocabulary restructuring -- typically when sensor readings that previously mapped to distinct context keys begin mapping to the same key because the physical boundary between them no longer exists. A fleet operator can also trigger a merge manually via the management API, but they cannot bypass the evidence-mass conservation. The formula applies regardless of the trigger.
Does merging affect the robot's compiled routines?
Yes. If Room 3A had a morning routine and Room 3B had a different morning routine, the merge creates a conflict. The robot has two compiled routines for the same context. The conflict increases h_t ambiguity and may defer both routines to deliberative processing until the robot accumulates enough experience in the merged space to compile a new unified routine.
How many merges can happen before the data degrades?
Each merge is exact within floating-point precision. Chained merges accumulate rounding error, but the evidence-mass conservation check catches drift. In practice, a robot that goes through dozens of merges over years of operation will have rounding error well below the behavioural threshold. The lineage record allows reconstruction from original values if precision becomes a concern.
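The claim about chained merges can be demonstrated directly: folding many accumulators together one at a time conserves total evidence mass up to floating-point error. A small sketch with synthetic values (not ccf-core code):

```python
# Chained merges: evidence mass stays conserved to floating-point
# precision across many sequential merges. Synthetic-data sketch.
import random

random.seed(7)
accs = [(random.randint(10, 300), random.uniform(0.1, 0.9))
        for _ in range(50)]                   # (n, c) pairs
total_mass = sum(n * c for n, c in accs)

n_m, c_m = accs[0]
for n, c in accs[1:]:                         # fold in one at a time
    mass = n_m * c_m + n * c
    n_m, c_m = n_m + n, mass / (n_m + n)

drift = abs(n_m * c_m - total_mass)
print(drift < 1e-9)  # True: rounding error far below any threshold
```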
Patent pending. US Provisional 64/039,655.
-- Colm Byrne, Founder -- Flout Labs, Galway, Ireland