Design small lenses. Make signals comparable.
3) Lens → Align → RSI (Single Chooser)
Idea. Declare a tiny, auditable lens to turn observable evidence into a signed, dimensionless contrast e; map e to bounded alignments; pool in u-space; select by a single bounded RSI. Classical magnitudes remain identical everywhere via phi((m,a)) = m.
3.1 Contrasts e (designing lenses for different AI aspects)
Purpose. A lens converts heterogeneous, logged signals into a signed contrast e where positive supports a candidate/decision and negative opposes it. Lenses are declared (not learned here) and differ by aspect (decoding, RAG, search, tools/agents, evaluators, domain adapters like SSM-Chem). Downstream math then maps e → a_in, a_out → RSI.
General form (declare once per lens).
e := ( SUM_i alpha_i * P_i - SUM_j beta_j * N_j ) / Unit
• P_i: positive evidence (e.g., citation hits, intent match).
• N_j: penalties (e.g., toxicity gap, contradiction, risk).
• alpha_i, beta_j > 0: declared weights.
• Unit > 0: declared scale so |e| stays in a workable range.
• Direction convention: positive supports, negative opposes — keep invariant.
Mapping preview (used next in 3.2).
a_in = tanh( -c * e_in )
a_out = tanh( +c * e_out ) # c > 0 declared per lens
Split evidence into “in” (penalties) vs “out” (support) when helpful; otherwise use one channel and set the other to 0.
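Mapping sketch (illustrative only; the canonical map is defined in 3.2). math.tanh is standard Python; c is the declared per-lens constant.
import math

def align(e_in, e_out, c=1.0):
    # Bounded alignments in (-1, +1); c > 0 is declared per lens.
    a_in = math.tanh(-c * e_in)    # penalties pull alignment negative
    a_out = math.tanh(+c * e_out)  # support pulls alignment positive
    return a_in, a_out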
Design rules (good lenses).
- Dimensionless: normalize so e has no units.
- Monotone: more good evidence ⇒ larger e; more risk ⇒ smaller e.
- Sparse & simple: 2–5 terms beat long mixes; name terms clearly.
- Stable ranges: pick Unit and c so typical |e| ∈ [0.2, 1.5] (see the calibration sketch after this list).
- Publishable: every term/weight is visible in logs; no hidden factors.
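Calibration sketch (hypothetical helper, not part of the canon). One way to satisfy the stable-range rule is to set Unit from a sample of logged raw contrasts; the 0.8 target below is an illustrative midpoint of [0.2, 1.5].
import statistics

def calibrate_unit(raw_contrasts, target=0.8):
    # Choose Unit so the median |e_raw| / Unit lands near the target.
    med = statistics.median(abs(e) for e in raw_contrasts)
    return max(med / target, 1e-12)  # keep Unit > 0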
Aspect lenses (ready patterns).
# A) Decoding / Beam Pick (per-candidate)
e_out := (intent_match + constraint_satisfaction + evidence_gain)
e_in := (policy_gap + contradiction + style_violation)
e := (e_out - e_in) / Unit
# B) RAG (per-document)
e_out := (semantic_gain + citation_hit + source_authority)
e_in := (toxicity_gap + policy_risk + staleness_penalty)
e := (e_out - e_in) / Unit
# C) Search (internet / intranet / local)
e := (alpha*hit_quality + beta*freshness + gamma*semantic_match - delta*risk_penalty) / Unit
# D) Tools & Agents (per-step)
e_out := (tool_success_rate + schema_match - repair_distance)
e_in := (error_rate + latency_spike + contradiction_rate)
e := (e_out - e_in) / Unit
# E) Evaluators / Judges (policy & intent)
e := (policy_hits + intent_fit + style_fit - safety_flags - conflict) / Unit
# F) Domain adapter (SSM-Chem example)
e := (yield_gain + selectivity_gain - hazard_penalty - sensitivity_risk) / Unit
Worked minis (calculator-fast).
• Decoding (single item). intent_match=0.7, constraint_satisfaction=0.4, evidence_gain=0.3, policy_gap=0.2, contradiction=0.0, style_violation=0.1, Unit=1 → e_out=1.4 ; e_in=0.3 ; e=(1.4-0.3)/1 = 1.1.
• Search (two results). alpha=1.0, beta=0.5, gamma=0.7, delta=0.8, Unit=1
A (hit_quality, freshness, semantic_match, risk_penalty) = (0.9, 0.6, 0.7, 0.2) → e=1.53 ; B (0.8, 0.2, 0.5, 0.5) → e=0.85 → A gets the higher alignment and RSI.
• Tools (retry decision). tool_success_rate=0.8, schema_match=0.6, repair_distance=0.2, error_rate=0.3, latency_spike=0.2, contradiction_rate=0.1 → e_out=1.2, e_in=0.6, e=0.6 → moderate positive; retry likely if RSI_env stays ≥ A0.
Traditional vs SSM-AI (building a chooser).
Goal: turn heterogeneous signals into a consistent, fair selector.
• Traditional: weighted sums → raw score; fragile tuning; unbounded; order/shard effects.
• SSM-AI: declare e, map to bounded alignments, pool in u-space, produce RSI ∈ (-1,+1); bounded, order-invariant, comparable under the same manifest.
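End-to-end sketch (illustrative; assumes "pool in u-space" means summing u = atanh(a) and mapping back with tanh; the canonical pooling and RSI rules are given in 3.2 and later sections).
import math

def rsi_from_contrasts(e_pairs, c=1.0):
    # e_pairs: iterable of (e_in, e_out) per signal; returns RSI in (-1, +1).
    # Assumes |c * e| is moderate so atanh stays finite.
    u_total = 0.0
    for e_in, e_out in e_pairs:
        a_in = math.tanh(-c * e_in)    # bounded alignment from penalties
        a_out = math.tanh(+c * e_out)  # bounded alignment from support
        u_total += math.atanh(a_in) + math.atanh(a_out)  # pool in u-space
    return math.tanh(u_total)  # bounded; addition makes pooling order-invariant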
Pseudocode — lens computation (drop-in).
def lens_contrast(positive_terms, negative_terms, Unit=1.0):
    """Signed, dimensionless contrast e = (P - N) / Unit."""
    P = sum(w * v for (w, v) in positive_terms)  # weighted positive evidence
    N = sum(w * v for (w, v) in negative_terms)  # weighted penalties
    return (P - N) / max(Unit, 1e-12)            # guard against Unit ~ 0
# Decoding (candidate)
e_decode = lens_contrast(
    positive_terms=[(1.0, intent_match), (1.0, constraint_sat), (1.0, evidence_gain)],
    negative_terms=[(1.0, policy_gap), (1.0, contradiction), (1.0, style_violation)],
    Unit=1.0)

# Search (result)
e_search = lens_contrast(
    positive_terms=[(alpha, hit_quality), (beta, freshness), (gamma, semantic_match)],
    negative_terms=[(delta, risk_penalty)],
    Unit=1.0)
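Usage check (recomputes the worked minis above; no new data).
# Decoding mini → e = 1.1
e_decode = lens_contrast(
    positive_terms=[(1.0, 0.7), (1.0, 0.4), (1.0, 0.3)],
    negative_terms=[(1.0, 0.2), (1.0, 0.0), (1.0, 0.1)])
# Search mini, result A → e = 1.53
e_search_A = lens_contrast(
    positive_terms=[(1.0, 0.9), (0.5, 0.6), (0.7, 0.7)],
    negative_terms=[(0.8, 0.2)])
assert abs(e_decode - 1.1) < 1e-9 and abs(e_search_A - 1.53) < 1e-9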
QA checklist (for a good lens).
• Sign sanity: raise a positive term ⇒ e increases; raise a penalty ⇒ e decreases.
• Range sanity: typical |e| in [0.2, 1.5] (adjust Unit or weights).
• Stability: small input perturbations ⇒ small changes in e.
• Transparency: all terms/weights logged and stamped; no hidden multipliers.
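The sign and range checks can be automated; a sketch (qa_lens is a hypothetical helper built on lens_contrast above, assuming all declared weights are positive).
def qa_lens(positive_terms, negative_terms, Unit=1.0, eps=0.05):
    e0 = lens_contrast(positive_terms, negative_terms, Unit)
    # Sign sanity: bumping a positive term must raise e; a penalty must lower it.
    (w, v), rest = positive_terms[0], positive_terms[1:]
    assert lens_contrast([(w, v + eps)] + rest, negative_terms, Unit) > e0
    (w, v), rest = negative_terms[0], negative_terms[1:]
    assert lens_contrast(positive_terms, [(w, v + eps)] + rest, Unit) < e0
    # Range sanity: typical |e| should sit in [0.2, 1.5].
    assert 0.2 <= abs(e0) <= 1.5, "adjust Unit or weights"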
One-line takeaway. A lens is a tiny, declared formula that converts evidence into a signed contrast e; everything downstream (alignment, pooling, RSI) is universal and identical across aspects.
Navigation
Previous: SSM-AI – Canon — Streaming Fuse & M2 Lane Ops (2.3, 2.4)
Next: SSM-AI – Symmetric Maps → Alignment (3.2)