Abstract
Extends the scalar SSM to higher dimensions. Vectors and matrices carry per-entry alignment `a ∈ [−1, +1]`. Sums use the associative U,W rapidity accumulator; products use M2 (rapidity-additive) alignment. Collapse recovers classical linear algebra on magnitudes; keeping `a` exposes how stability concentrates and propagates.
🧭 Conventions
- Sum (⊕ / oplus): n-ary via `U = Σ w_i * atanh(a_i)`, `W = Σ w_i`, `a' = tanh(U/W)` with `w_i = |m_i|^gamma` (default `gamma = 1`).
- Product (⊗ / otimes): M2 per factor: `(m1,a1) ⊗ (m2,a2) = (m1*m2, tanh(atanh(a1) + atanh(a2)))`.
- Zero & one: `(0,+1)` is the additive identity; the multiplicative identity is `(1,0)`.
- Clamp: before any `atanh`, `a <- clamp(a, −1+eps, +1−eps)`, `eps = 1e−6`.
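These conventions can be sketched in a few lines of pure Python (the names `oplus`, `otimes`, and `clamp` are illustrative, not fixed by this text):

```python
import math

EPS = 1e-6  # clamp epsilon, per the conventions above

def clamp(a):
    """Keep alignment strictly inside (-1, +1) before atanh."""
    return max(-1.0 + EPS, min(1.0 - EPS, a))

def otimes(x, y):
    """M2 product: magnitudes multiply, rapidities (atanh a) add."""
    (m1, a1), (m2, a2) = x, y
    return (m1 * m2, math.tanh(math.atanh(clamp(a1)) + math.atanh(clamp(a2))))

def oplus(terms, gamma=1.0):
    """n-ary sum: classical magnitude sum; alignment is the
    |m|^gamma-weighted rapidity mean tanh(U/W)."""
    U = sum(abs(m) ** gamma * math.atanh(clamp(a)) for m, a in terms)
    W = sum(abs(m) ** gamma for m, a in terms)
    m_sum = sum(m for m, _ in terms)
    a_sum = math.tanh(U / W) if W > 0 else 1.0  # all-zero weights: additive-identity alignment
    return (m_sum, a_sum)
```

With these two primitives, everything below (matvec, matmul, determinant alignment) is composition.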
📦 Symbolic vectors
A symbolic vector is a finite tuple of symbolic numerals:
v = ( (m1, a1), (m2, a2), …, (mn, an) )
Componentwise addition & negation
(v ⊕ w)_i = (mi, ai) ⊕ (ni, bi)
(−v)_i = −(mi, ai) = (−mi, ai)
Scalar multiplication (real r)
(r ⊙ v)_i = ( r*mi , ai ) # signs live on m; a is not scaled
Embedding for analytics (Φ_beta)
`S_beta(m,a) = m * (1 − beta*(1 − a))`, `beta ∈ [0,1]`
`Φ_beta(m,a) = ( m , S_beta(m,a) )`
Vector size functionals
- Default 2D-embedding L2 size (matches scalar size): `||v||_beta = sqrt( Σ_i [ mi^2 + S_beta(mi,ai)^2 ] )`
- Strength-only (dashboard): `||v||_beta^S = sqrt( Σ_i S_beta(mi,ai)^2 )`
- Collapse: if all `ai = +1`, then `S_beta = mi`. For exact classical L2, set `beta = 0`.
Numeric example (default size): `v = ( (3,+1), (4,+0.5) )`, `beta = 1`
`S_1(3,+1) = 3`, `S_1(4,+0.5) = 2`
`||v||_1 = sqrt( 3^2 + 3^2 + 4^2 + 2^2 ) = sqrt(38) ≈ 6.164`
(Strength-only: sqrt(3^2+2^2) = sqrt(13) ≈ 3.606.)
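The numeric example above can be reproduced with a short sketch (helper names `S`, `vec_size`, `vec_size_strength` are illustrative):

```python
import math

def S(m, a, beta=1.0):
    """beta-embedding strength: collapses to m when a = +1 (or beta = 0)."""
    return m * (1.0 - beta * (1.0 - a))

def vec_size(v, beta=1.0):
    """Default 2D-embedding L2 size: sqrt(sum(m^2 + S^2))."""
    return math.sqrt(sum(m * m + S(m, a, beta) ** 2 for m, a in v))

def vec_size_strength(v, beta=1.0):
    """Strength-only (dashboard) size: sqrt(sum(S^2))."""
    return math.sqrt(sum(S(m, a, beta) ** 2 for m, a in v))

v = [(3, 1.0), (4, 0.5)]
# vec_size(v) -> sqrt(38) ≈ 6.164; vec_size_strength(v) -> sqrt(13) ≈ 3.606
```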
🧮 Symbolic matrices
A symbolic matrix is an array of pairs:
M = [ (m_ij, a_ij) ] # i=1..p, j=1..n
Matrix addition (entrywise): `(M ⊕ N)_ij = M_ij ⊕ N_ij`
Matrix–vector multiplication (⊗ with ⊕ accumulation)
For M ∈ R^{p×n} (symbolic) and v ∈ R^n (symbolic):
(M ⊗ v)_i = ⊕_{k=1..n} ( M_{ik} ⊗ v_k )
- Each term uses M2.
- Accumulate the inner sum via U,W over `k` to preserve associativity.
- Magnitudes follow classical matvec; alignment is the rapidity-mean of term alignments.
Matrix multiplication (⊗ with ⊕ accumulation)
For M ∈ R^{p×r}, N ∈ R^{r×n}:
(M ⊗ N)_{ij} = ⊕_{k=1..r} ( M_{ik} ⊗ N_{kj} )
- M2 per product; inner ⊕ via U,W.
- Associativity of matmul inherits from M2 and the n-ary ⊕ accumulator.
Identity & zero matrices
- Identity: diagonal `(1, 0)`; off-diagonal `(0, +1)`.
- Zero: all entries `(0, +1)` (canonical zero-class).
🧪 Worked example (2×2, one entry in detail)
M = [ (1, 0) (2, +0.9)
(0, +1) (1, −0.5) ]
N = [ (1, 0) (0, +1)
(3, +0.7) (1, 0) ]
Compute (M ⊗ N)_{11} = (1,0)⊗(1,0) ⊕ (2,+0.9)⊗(3,+0.7).
- Term A: `(1,0)⊗(1,0) = (1, tanh(0+0)) = (1, 0)`
- Term B: `(2,+0.9)⊗(3,+0.7)`: `u = atanh(0.9) ≈ 1.472`, `v = atanh(0.7) ≈ 0.867`, `a' = tanh(u+v) = tanh(2.339) ≈ 0.981` → `(6, +0.981)`

U,W accumulation (γ=1):
`U = 1*atanh(0) + 6*atanh(0.981) ≈ 0 + 6*2.323 ≈ 13.94`
`W = 1 + 6 = 7`
`a_sum = tanh(U/W) = tanh(1.991) ≈ +0.963`
Magnitude: 1 + 6 = 7
So (M ⊗ N)_{11} ≈ (7, +0.963).
(Other entries proceed analogously.)
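The full 2×2 product, including the worked `(M ⊗ N)_{11}` entry, can be checked numerically. A self-contained sketch (function names are illustrative):

```python
import math

EPS = 1e-6

def rap(a):
    """Clamped rapidity atanh(a)."""
    a = max(-1.0 + EPS, min(1.0 - EPS, a))
    return math.atanh(a)

def otimes(x, y):
    """M2 product: magnitudes multiply, rapidities add."""
    return (x[0] * y[0], math.tanh(rap(x[1]) + rap(y[1])))

def oplus(terms, gamma=1.0):
    """n-ary sum via the U,W rapidity accumulator."""
    U = sum(abs(m) ** gamma * rap(a) for m, a in terms)
    W = sum(abs(m) ** gamma for m, a in terms)
    return (sum(m for m, _ in terms), math.tanh(U / W) if W > 0 else 1.0)

def matmul(M, N):
    """(M ⊗ N)_{ij} = ⊕_{k} M_{ik} ⊗ N_{kj} on (magnitude, alignment) pairs."""
    p, r, n = len(M), len(N), len(N[0])
    return [[oplus([otimes(M[i][k], N[k][j]) for k in range(r)])
             for j in range(n)] for i in range(p)]

M = [[(1, 0.0), (2, 0.9)], [(0, 1.0), (1, -0.5)]]
N = [[(1, 0.0), (0, 1.0)], [(3, 0.7), (1, 0.0)]]
P = matmul(M, N)
# P[0][0] ≈ (7, +0.96), matching the worked entry above
```

Note that zero-magnitude terms carry zero weight in ⊕, so the `(0,+1)` entries drop out of the accumulated alignment exactly as in the hand computation.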
🧱 Induced matrix sizes (Frobenius-like)
- Symbolic Frobenius (default 2D-embedding): `||M||_beta = sqrt( Σ_{i,j} [ m_ij^2 + S_beta(m_ij, a_ij)^2 ] )`
- Strength-only (dashboard): `||M||_beta^S = sqrt( Σ_{i,j} S_beta(m_ij, a_ij)^2 )`
- Collapse: if all `a_ij = +1`, then `S_beta = m_ij`. For exact classical Frobenius, set `beta = 0`.
Numeric example (strength-only, β=1): `M = [ (2,+1) (1,−0.5); (0,+1) (3,+0.8) ]`
`S_1` entries: `2, −0.5, 0, 2.4` → `||M||_1^S = sqrt(4 + 0.25 + 0 + 5.76) = sqrt(10.01) ≈ 3.16`
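The strength-only Frobenius example can be verified with a short sketch (the name `frob_strength` is illustrative):

```python
import math

def S(m, a, beta=1.0):
    """beta-embedding strength S_beta(m, a)."""
    return m * (1.0 - beta * (1.0 - a))

def frob_strength(M, beta=1.0):
    """Strength-only Frobenius size: sqrt(sum of S_beta(m_ij, a_ij)^2)."""
    return math.sqrt(sum(S(m, a, beta) ** 2 for row in M for m, a in row))

M = [[(2, 1.0), (1, -0.5)], [(0, 1.0), (3, 0.8)]]
# frob_strength(M) -> sqrt(10.01) ≈ 3.16
```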
📐 Operator bounds (β-embedded)
Form the real matrix A_beta with entries A_beta[ij] = S_beta(m_ij, a_ij).
- The classical operator norm
||A_beta||_2upper-bounds strength propagation for one⊗–⊕layer. - For multi-layer symbolic nets (repeated matvec), track both magnitudes and
||A_beta||; declare thebetaused.
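A minimal pure-stdlib sketch of this bound, under the assumption that power iteration on `AᵀA` is acceptable for estimating `||A_beta||_2` (the helpers `embed` and `opnorm2` are illustrative, not from this text):

```python
import math

def S(m, a, beta=1.0):
    """beta-embedding strength S_beta(m, a)."""
    return m * (1.0 - beta * (1.0 - a))

def embed(M, beta=1.0):
    """Real matrix A_beta with entries S_beta(m_ij, a_ij)."""
    return [[S(m, a, beta) for m, a in row] for row in M]

def opnorm2(A, iters=200):
    """Spectral norm ||A||_2 via power iteration on A^T A."""
    n = len(A[0])
    x = [1.0] * n
    for _ in range(iters):
        y = [sum(A[i][j] * x[j] for j in range(n)) for i in range(len(A))]  # y = A x
        x = [sum(A[i][j] * y[i] for i in range(len(A))) for j in range(n)]  # x = A^T y
        nx = math.sqrt(sum(c * c for c in x))
        x = [c / nx for c in x]
    y = [sum(A[i][j] * x[j] for j in range(n)) for i in range(len(A))]
    return math.sqrt(sum(c * c for c in y))
```

For the Frobenius example above, `embed` gives `A_1 = [[2, -0.5], [0, 2.4]]` and `opnorm2` returns about 2.53, strictly below the Frobenius value ≈ 3.16, as expected.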
🔁 Invertibility & determinants (notes)
- Collapse: `det_magnitude(M) = det( [m_ij] )` is classical.
- Symbolic determinant alignment (M2): combine permutation-term alignments via ⊕ in rapidity; zero magnitudes or Zearo-heavy patterns yield fragile outcomes.
- In practice, solves often use the classical solve on `m` while tracking alignment for diagnostics; exact symbolic inversion requires absence of zero-class obstructions and acceptable condition numbers (seen in `A_beta`).
💡 Interpretation & takeaway
- A symbolic vector is a direction plus a stability profile over its components.
- A symbolic matrix propagates both sizes and alignments; Nearo-heavy rows/columns can dominate behavior even when magnitudes look strong.
- The U,W accumulator is essential: it preserves associativity of sums inside matvec/matmul.
Bottom line: With M2 products and associative rapidity means for sums, symbolic linear algebra becomes centre-aware. Collapse gives standard results; keeping `a` reveals where stability concentrates and how fragility spreads.
Navigation
Previous → Regularization of alignment
Next → Differential equations (symbolic form)
Disclaimer
Observation only. Reproducible math; domain claims require independent peer review. Defaults: gamma=1, mult_mode=M2, clamp_eps=1e−6, |a|<1.