How this lane complements (not replaces) today’s confidence tools
Context (why this section).
You already use confidence signals—entropy, calibrations, ensembles, heuristics. SSM-AI sits beside them as a bounded, order-invariant lane that leaves the underlying numbers unchanged via phi((m,a)) = m. It standardizes selection/routing with a single chooser RSI in (-1,+1) while keeping your existing scores, logits, and probabilities intact.
Entropy-only confidence.
• Typical: unbounded mixes on ad-hoc scales; results depend on processing order across batches/streams.
• SSM-AI: clamp first, then map to rapidity and fuse additively:
a_c := clamp(a, -1+eps_a, +1-eps_a)
u := atanh(a_c)
a_out := tanh( (SUM w*u) / max(SUM w, eps_w) )
Result: bounded a_out in (-1,+1), order-invariant by construction.
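The clamp → rapidity → weighted mean → tanh recipe above can be sketched in Python; the function name `fuse` and the eps values are illustrative placeholders, not part of SSM-AI:

```python
import math

EPS_A = 1e-6   # clamp margin (placeholder; declare in your manifest)
EPS_W = 1e-12  # guard against zero total weight

def fuse(alignments, weights):
    """Order-invariant additive fusion in rapidity (u) space."""
    U = 0.0
    W = 0.0
    for a, w in zip(alignments, weights):
        a_c = min(max(a, -1 + EPS_A), 1 - EPS_A)  # clamp first
        U += w * math.atanh(a_c)                   # map to rapidity, accumulate
        W += w
    return math.tanh(U / max(W, EPS_W))            # bounded a_out in (-1, +1)
```

Because the accumulation is a plain weighted sum in u-space, shuffling the inputs cannot change the result.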
MC dropout / deep ensembles.
• Typical: retraining or K forward passes; expensive at serving time.
• SSM-AI: drop-in, read-only; one pass. You may still feed ensemble statistics into the lane as a lens:
# example lens from dispersion
e := dispersion_metric / Unit
a := tanh(-c*e) # larger dispersion → lower alignment
The numbers m from each model remain unchanged (phi((m,a)) = m).
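A minimal sketch of such a lens, assuming sample standard deviation as the dispersion metric (any dispersion statistic would do); `unit` and `c` are the lens parameters you declare once:

```python
import math
import statistics

def dispersion_lens(samples, unit=1.0, c=1.0):
    """Map ensemble dispersion to a bounded alignment signal a in (-1, +1).

    The model outputs m are untouched; this only produces the parallel lane.
    Requires at least two samples (statistics.stdev raises otherwise).
    """
    e = statistics.stdev(samples) / unit   # dimensionless evidence
    return math.tanh(-c * e)               # larger dispersion -> lower alignment
```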
Calibrated probabilities (Platt, isotonic).
• Typical: reshape m to look calibrated.
• SSM-AI: never reshapes m. It adds a parallel, bounded signal a and a chooser RSI, so you keep your probability semantics intact and gain auditable routing.
Heuristic rerankers / weighted sums.
• Typical: sensitive to scale/units; hard to compare across vendors.
• SSM-AI: declare a lens once (units, scales), map to a_in, a_out, then rank by a single bounded chooser:
RSI := tanh( (SUM w*atanh(a_out) - SUM w*atanh(a_in)) / max(SUM w, eps_w) )
Comparable across prompts/models because the u-space mean normalizes composition.
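The chooser formula translates directly to Python; clamping before atanh follows the clamp-first rule, and the eps constants are placeholder values:

```python
import math

EPS_A = 1e-6   # clamp margin (placeholder)
EPS_W = 1e-12  # zero-weight guard

def _u(a):
    """Clamp, then map alignment to rapidity (clamp-first rule)."""
    return math.atanh(min(max(a, -1 + EPS_A), 1 - EPS_A))

def rsi(a_in, a_out, weights):
    """Single bounded chooser in (-1, +1); positive favors the out side."""
    num = sum(w * (_u(ao) - _u(ai))
              for ai, ao, w in zip(a_in, a_out, weights))
    return math.tanh(num / max(sum(weights), EPS_W))
```

Ranking candidates by this value compares like with like even when the raw vendor scores live on different scales, since everything passes through the same u-space mean.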
Synergy (use them together).
• Treat any existing confidence signal as evidence e; convert with a := tanh(±c*e); pool via (U,W); pick by RSI.
• Calibrations remain on m; SSM-AI adds bounded stability for routing, gating, and auditing.
Non-goals (clarity).
• No model surgery. Training/logits/probabilities remain intact: phi((m,a)) = m.
• No hidden smoothing. Bands, clamps, weights, and policies live in a manifest.
• No order dependence. Batch == stream == shuffled by U/W design.
• No silent actuation. RSI_env := g_t * RSI scales alignment only; any control change needs its own safety case.
Pocket drop-in (turn any metric into a lane and rank).
# Given a scalar confidence-like metric q (any scale), declare units once:
e := q / Unit
a := tanh(+c*e) # sign per lens: + raises, - lowers alignment
# Pool across items/providers (order-invariant):
U := SUM w*atanh(a)
W := SUM w
a_out := tanh( U / max(W, eps_w) )
# Use a_out directly or build a two-sided chooser (in/out evidence) → RSI
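The pocket drop-in above, sketched in Python; `UNIT`, `C`, and the eps constants stand in for the lens parameters you would declare once in the manifest (the values here are placeholders):

```python
import math

EPS_A = 1e-6   # clamp margin (placeholder)
EPS_W = 1e-12  # zero-weight guard
UNIT = 1.0     # declared once per lens (placeholder value)
C = 1.0        # lens gain (placeholder value)

def to_alignment(q, sign=+1.0):
    """Turn any scalar confidence-like metric q into a bounded lane value."""
    e = q / UNIT
    a = math.tanh(sign * C * e)                # sign per lens: + raises alignment
    return min(max(a, -1 + EPS_A), 1 - EPS_A)  # keep atanh well-defined later

def pool(alignments, weights):
    """Order-invariant pooling via (U, W); returns a_out in (-1, +1)."""
    U = sum(w * math.atanh(a) for a, w in zip(alignments, weights))
    W = sum(weights)
    return math.tanh(U / max(W, EPS_W))
```

From here, `pool` gives you a_out to rank on directly, or you can feed pooled in/out evidence into the two-sided RSI chooser.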
One-line takeaway.
Keep all your confidence tools; standardize selection with a single bounded lane and chooser that are order-invariant and leave m pristine via phi((m,a)) = m.