SSM-AI – Observation-Only Ethos & Collapse Parity (1.2)

Keep numbers pristine; add bounded alignment for selection and audit

What “observation-only” means
SSM-AI adds a bounded alignment lane alongside the numbers you already compute; it never edits those numbers.
Every quantity is x := (m,a) with a in (-1,+1) and collapse parity phi((m,a)) = m.
Selection/routing uses the lane (a, RSI); classical logic continues to see m exactly as before.
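
A minimal sketch of the pair, assuming a plain dataclass representation (the names Obs and phi below are illustrative, not part of any SSM-AI API):

from dataclasses import dataclass

@dataclass(frozen=True)
class Obs:
    m: float  # the magnitude your classical pipeline already computes
    a: float  # bounded alignment lane, a in (-1, +1)

def phi(x: Obs) -> float:
    # Collapse parity: projecting away the lane returns the untouched magnitude.
    return x.m

x = Obs(m=0.73, a=0.42)
assert phi(x) == 0.73   # classical logic sees m exactly as before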


Non-negotiables (normative)

# Collapse parity (always)
phi((m,a)) = m

# Clamp-first numerics
a_c := clamp(a, -1+eps_a, +1-eps_a)         # default eps_a = 1e-6

# Rapidity map (stable composition)
u := atanh(a_c)
a := tanh(u)

# Order-invariant streaming fuse
U += w*atanh(a)
W += w
a_out := tanh( U / max(W, eps_w) )          # default eps_w = 1e-12

# Lane product / ratio (M2 policy)
a' := tanh( atanh(a1) +/- atanh(a2) )       # division guarded by policy

# Gate purity (alignment only)
RSI_env := g_t * RSI                        # m never changes
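
A Python sketch of the same rules under the stated defaults; the helper names (clamp_a, fuse, lane_mul, lane_div, rsi_env) are illustrative, not a prescribed API:

from math import atanh, tanh

EPS_A, EPS_W = 1e-6, 1e-12   # defaults from the rules above

def clamp_a(a: float, eps_a: float = EPS_A) -> float:
    # Clamp-first numerics: keep the lane strictly inside (-1, +1).
    return max(-1.0 + eps_a, min(1.0 - eps_a, a))

def fuse(pairs, eps_w: float = EPS_W) -> float:
    # Order-invariant streaming fuse: accumulate weighted rapidity, then collapse back.
    U = W = 0.0
    for a, w in pairs:                 # pairs of (alignment, weight)
        U += w * atanh(clamp_a(a))
        W += w
    return tanh(U / max(W, eps_w))

def lane_mul(a1: float, a2: float) -> float:
    # Lane product (M2 policy): add rapidities.
    return tanh(atanh(clamp_a(a1)) + atanh(clamp_a(a2)))

def lane_div(a1: float, a2: float) -> float:
    # Lane ratio (M2 policy): subtract rapidities; near-zero m is handled by division_policy.
    return tanh(atanh(clamp_a(a1)) - atanh(clamp_a(a2)))

def rsi_env(g_t: float, rsi: float) -> float:
    # Gate purity: the gate scales alignment/RSI only; m never changes.
    return g_t * rsi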


Why collapse parity holds (quick identities)

Keep classical operations on magnitudes; compose the lane separately in bounded space.

# Sum / difference
(m1,a1) ± (m2,a2) -> (m1 ± m2, a±)          # phi(...) = m1 ± m2

# Product / ratio
(m1,a1) * (m2,a2) -> (m1*m2, a_mul)
(m1,a1) / (m2,a2) -> (m1/m2, a_div)         # phi(...) = m1*m2 or m1/m2

# Pooling (any order, any shards)
pool({(m_i,a_i)}) -> (m_pool, a_out)
# m_pool from your existing value logic; lane uses U/W
# => phi(...) = m_pool
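
A quick parity check per identity, reusing the Obs/phi and lane helpers sketched above; the lane choice for the sum is illustrative, and parity holds regardless of how a± is computed:

x1, x2 = Obs(2.0, 0.3), Obs(5.0, -0.1)

s = Obs(x1.m + x2.m, fuse([(x1.a, 1.0), (x2.a, 1.0)]))   # sum: m added classically
p = Obs(x1.m * x2.m, lane_mul(x1.a, x2.a))               # product
q = Obs(x1.m / x2.m, lane_div(x1.a, x2.a))               # ratio

assert phi(s) == x1.m + x2.m
assert phi(p) == x1.m * x2.m
assert phi(q) == x1.m / x2.m   # collapse parity holds op by op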


Calculator-fast checks (run once, keep for CI)

# C1 Collapse parity
m = 0.73; a = tanh(0.45) -> phi((m,a)) = 0.73

# C2 Order invariance
a1 = tanh(0.2); a2 = tanh(0.4); w1 = w2 = 1
a_out = tanh((0.2+0.4)/2)       # swap inputs → same a_out

# C3 Lane mul/div (M2)
a1 = tanh(0.7); a2 = tanh(0.1)
a_mul = tanh(0.8); a_div = tanh(0.6)
# Meanwhile m multiplies/divides as usual (unchanged by SSM-AI)
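
The same checks as assertions, suitable for a CI smoke test; assumes the Obs/phi and lane helpers sketched above:

from math import tanh, isclose

# C1 Collapse parity
assert phi(Obs(0.73, tanh(0.45))) == 0.73

# C2 Order invariance: swapping the inputs leaves the fused lane unchanged
fwd = fuse([(tanh(0.2), 1.0), (tanh(0.4), 1.0)])
rev = fuse([(tanh(0.4), 1.0), (tanh(0.2), 1.0)])
assert isclose(fwd, rev) and isclose(fwd, tanh((0.2 + 0.4) / 2))

# C3 Lane mul/div (M2): rapidities add and subtract
assert isclose(lane_mul(tanh(0.7), tanh(0.1)), tanh(0.8))
assert isclose(lane_div(tanh(0.7), tanh(0.1)), tanh(0.6))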


Traditional vs SSM-AI (post-processing micro-example)

Goal: surface reliability without rewriting predictions.

Traditional: rescore probabilities or add thresholds, which may change the numbers; order effects creep in.
SSM-AI: keep m identical; compute a bounded a; pool in rapidity (U/W); select/route by RSI or RSI_env; results are comparable across vendors and order-invariant.

Pocket pseudocode — prove observation-only

from math import atanh, tanh

def observe_only(m_list, a_list, w_list, eps_a=1e-6, eps_w=1e-12):
    # Collapse parity: the magnitude path is passed through untouched.
    m_snapshot = list(m_list)

    # Lane path: clamp to (-1, +1), map to rapidity, weight, and fuse.
    U = sum(w * atanh(max(-1 + eps_a, min(1 - eps_a, a)))
            for a, w in zip(a_list, w_list))
    W = sum(w_list)
    a_out = tanh(U / max(W, eps_w))

    assert list(m_list) == m_snapshot   # phi((m,a)) == m: no magnitude was edited
    return a_out                        # used for RSI/bands; the m-path remains pristine
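
A quick usage check against the function above (values illustrative):

from math import tanh

m_vals = [0.73, 1.10, 0.42]                       # magnitudes, left untouched
a_vals = [tanh(0.2), tanh(0.4), tanh(-0.1)]       # bounded lane values
w_vals = [abs(m) for m in m_vals]                 # default weights w = |m|^gamma, gamma = 1

a_fwd = observe_only(m_vals, a_vals, w_vals)
a_rev = observe_only(m_vals[::-1], a_vals[::-1], w_vals[::-1])
assert abs(a_fwd - a_rev) < 1e-12                 # order-invariant fuse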


Edge policies (declare once, keep stable)

division_policy := "strict"      # near-zero protection (default)
# alternatives "meadow"/"soft" must be explicit and tested

w := |m|^gamma                   # default weights; gamma = 1
# uniform w := 1 permitted if declared

# Global band defaults
A++: a >= +0.90
A+ : +0.60 <= a < +0.90
A0 : -0.60 <  a < +0.60
A- : -0.90 <  a <= -0.60
A--: a <= -0.90
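
A sketch of the defaults as code (default_weight and band are illustrative names):

def default_weight(m: float, gamma: float = 1.0) -> float:
    # w := |m|^gamma with gamma = 1 unless a different policy is declared.
    return abs(m) ** gamma

def band(a: float) -> str:
    # Global band defaults from the table above.
    if a >= 0.90:
        return "A++"
    if a >= 0.60:
        return "A+"
    if a > -0.60:
        return "A0"
    if a > -0.90:
        return "A-"
    return "A--"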

One-line takeaway. SSM-AI observes and publishes alignment; it never edits your numbers.
phi((m,a)) = m is sacrosanct.


Navigation
Previous: SSM-AI – Positioning & Purpose (1, 1.1)
Next: SSM-AI – Where the Lane Lives (1.3)

