Mathematical Foundations

A rigorous introduction to the mathematics behind Versor.

1. Clifford Algebra \(Cl(p, q, r)\)

A Clifford algebra is generated by \(n = p + q + r\) basis vectors \(\{e_1, \ldots, e_n\}\) satisfying:

\[e_i e_j + e_j e_i = 2 g_{ij}\]

where \(g\) is the metric tensor with signature \((p, q, r)\):

\[e_i^2 = \begin{cases} +1 & i \leq p \\ -1 & p < i \leq p + q \\ 0 & i > p + q \end{cases}\]

The \(r\) degenerate (null) dimensions satisfy \(e_i^2 = 0\). When a null basis vector appears in a product with itself, the entire term vanishes. This is used for Projective Geometric Algebra (PGA), where translations become rotors in null directions.

For distinct basis vectors: \(e_i e_j = -e_j e_i\) (they anti-commute).

Basis Blades

Products of distinct basis vectors form basis blades. In \(n\) dimensions there are \(2^n\) basis blades, one for each subset of \(\{e_1, \ldots, e_n\}\):

| Grade | Count | Example (\(n=3\)) |
|---|---|---|
| 0 (scalar) | \(\binom{n}{0} = 1\) | \(1\) |
| 1 (vector) | \(\binom{n}{1} = n\) | \(e_1, e_2, e_3\) |
| 2 (bivector) | \(\binom{n}{2}\) | \(e_{12}, e_{13}, e_{23}\) |
| 3 (trivector) | \(\binom{n}{3}\) | \(e_{123}\) |
| \(k\) | \(\binom{n}{k}\) | \(k\)-blade |

A multivector is a general element: \(M = \sum_{I} m_I e_I\) where \(I\) ranges over all subsets.

Binary Indexing

Versor represents basis blades by their binary index. For \(e_{i_1 \cdots i_k}\), the index is \(2^{i_1-1} + \cdots + 2^{i_k-1}\):

  • \(1 \to 0\) (binary 000)
  • \(e_1 \to 1\) (binary 001)
  • \(e_2 \to 2\) (binary 010)
  • \(e_{12} \to 3\) (binary 011)
  • \(e_3 \to 4\) (binary 100)

The grade of a blade equals the popcount (number of set bits) of its index.
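This indexing scheme can be sketched in a few lines of plain Python (illustrative only, not Versor's actual API):

```python
def blade_index(*indices):
    """Binary index of e_{i1 i2 ...}: the sum of 2^(i-1) for each factor."""
    idx = 0
    for i in indices:
        idx |= 1 << (i - 1)
    return idx

def grade(index):
    """Grade = popcount (number of set bits) of the binary index."""
    return bin(index).count("1")

assert blade_index(1, 2) == 3            # e_12 -> binary 011
assert blade_index(3) == 4               # e_3  -> binary 100
assert grade(blade_index(1, 2, 3)) == 3  # trivector e_123
```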

2. The Geometric Product

The fundamental operation. For two multivectors \(A\) and \(B\):

\[(AB)_k = \sum_{i \oplus j = k} \sigma(i, j) \cdot A_i \cdot B_j\]

where \(i \oplus j\) is XOR (the result blade index), and \(\sigma(i, j)\) is the sign factor combining:

  1. Commutation sign: Number of transpositions needed to reorder basis vectors. If basis vector \(e_a\) in \(A\) must pass \(m\) basis vectors in \(B\) that have lower index, the sign contribution is \((-1)^m\).

  2. Metric sign: For each shared basis vector \(e_k\) (where bit \(k\) is set in both \(i\) and \(j\)), the product contributes a factor of \(e_k^2\): \(-1\) if \(p < k \leq p + q\), and \(0\) (annihilating the whole term) if \(k > p + q\).

Versor precomputes these signs in the Cayley table (a \(2^n \times 2^n\) lookup of indices and signs).
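The two sign factors can be sketched with bit arithmetic (a hypothetical helper, not Versor's actual Cayley-table code; assumes no degenerate dimensions):

```python
def product_sign(a, b, p, q):
    """Sign of the product e_a e_b in Cl(p, q), for binary blade indices a, b.
    The result blade index itself is just a ^ b (XOR)."""
    # 1. commutation sign: each basis vector of `a` must hop over the
    #    lower-indexed basis vectors of `b`
    swaps, x = 0, a >> 1
    while x:
        swaps += bin(x & b).count("1")
        x >>= 1
    sign = -1 if swaps & 1 else 1
    # 2. metric sign: shared basis vectors contract; bits >= p square to -1
    if bin((a & b) >> p).count("1") & 1:
        sign = -sign
    return sign

# e_1 e_1 = +1 and e_2 e_1 = -e_12 in Cl(3,0); e_2^2 = -1 in Cl(1,1)
assert product_sign(0b001, 0b001, 3, 0) == 1
assert product_sign(0b010, 0b001, 3, 0) == -1
assert product_sign(0b10, 0b10, 1, 1) == -1
```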

Decomposition

For vectors \(a\) and \(b\), the geometric product decomposes into:

\[ab = a \cdot b + a \wedge b\]
  • Inner product \(a \cdot b\): lowers grade (projection). Symmetric for vectors.
  • Outer product \(a \wedge b\): raises grade (area/volume). Anti-symmetric for vectors.

3. Rotors

A rotor is an even-grade multivector satisfying \(R\tilde{R} = 1\) (unit versor condition). The reversion \(\tilde{R}\) reverses the order of basis vectors in each blade:

\[\widetilde{e_{i_1 \cdots i_k}} = e_{i_k \cdots i_1} = (-1)^{k(k-1)/2} e_{i_1 \cdots i_k}\]
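The grade-dependent sign is easy to tabulate; a one-line Python sketch (illustrative, not Versor's API):

```python
def reversion_sign(k):
    """Sign (-1)^(k(k-1)/2) picked up when reversing a grade-k blade."""
    return (-1) ** (k * (k - 1) // 2)

# grades 0..3: scalars and vectors are unchanged; bivectors and trivectors flip
assert [reversion_sign(k) for k in range(4)] == [1, 1, -1, -1]
```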

Generating Rotors

A rotor for rotation by angle \(\theta\) in the plane defined by unit bivector \(\hat{B}\) is:

\[R = \exp(-\hat{B}\theta/2) = \cos(\theta/2) - \hat{B}\sin(\theta/2)\]

In Versor, the RotorLayer learns bivector weights \(B\) (not necessarily unit) and computes \(R = \exp(-B/2)\) using a signature-aware closed-form formula:

  • Elliptic (\(B^2 < 0\), Euclidean planes): \(\exp(B) = \cos\theta + \frac{\sin\theta}{\theta} B\) where \(\theta = \sqrt{-B^2}\)
  • Hyperbolic (\(B^2 > 0\), Minkowski planes): \(\exp(B) = \cosh\theta + \frac{\sinh\theta}{\theta} B\) where \(\theta = \sqrt{B^2}\)
  • Parabolic (\(B^2 \approx 0\), null planes): \(\exp(B) \approx 1 + B\)

This is exact for simple bivectors (all bivectors in \(n \leq 3\)). For non-simple bivectors (\(n \geq 4\)), Versor provides exp_decomposed() which decomposes via GA power iteration into simple components, exponentiates each in closed form, and composes via geometric product. A Taylor series fallback (_exp_taylor) is retained for edge cases.
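The three branches can be sketched for a bivector with known scalar square \(B^2\) (a sketch assuming \(B\) is simple, with coefficients in an arbitrary bivector basis; not Versor's actual RotorLayer code):

```python
import math

def exp_bivector(coeffs, B2):
    """Closed-form exp(B) for a simple bivector B given its scalar square B2.
    Returns (scalar_part, bivector_coeffs)."""
    eps = 1e-12
    if B2 < -eps:                          # elliptic: Euclidean rotation plane
        theta = math.sqrt(-B2)
        return math.cos(theta), [c * math.sin(theta) / theta for c in coeffs]
    if B2 > eps:                           # hyperbolic: boost plane
        theta = math.sqrt(B2)
        return math.cosh(theta), [c * math.sinh(theta) / theta for c in coeffs]
    return 1.0, list(coeffs)               # parabolic: exp(B) ~ 1 + B

# In Cl(3,0), B = (pi/4) e_12 has B^2 = -(pi/4)^2 (elliptic case)
s, bv = exp_bivector([math.pi / 4, 0.0, 0.0], -(math.pi / 4) ** 2)
assert abs(s - math.cos(math.pi / 4)) < 1e-12
assert abs(bv[0] - math.sin(math.pi / 4)) < 1e-12
```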

The Sandwich Product

A rotor transforms a multivector via the sandwich product:

\[x' = R x \tilde{R}\]

Properties:

  • Grade-preserving: if \(x\) is grade-\(k\), so is \(x'\)
  • Isometric: \(|x'| = |x|\) (the norm is preserved)
  • Composable: \(R_2(R_1 x \tilde{R}_1)\tilde{R}_2 = (R_2 R_1)\, x\, \widetilde{(R_2 R_1)}\)
  • Smooth: continuous in the bivector parameters

Parameter Count

A rotation in \(n\) dimensions has \(\binom{n}{2}\) degrees of freedom (one per bivector basis blade). Compare:

  • Rotation matrix: \(n^2\) parameters (\(n(n-1)/2\) independent)
  • Rotor: \(\binom{n}{2}\) parameters (all independent, no redundancy)

4. Metric Signatures

Euclidean \(Cl(p, 0)\)

All basis vectors square to \(+1\). Rotors perform standard rotations.

  • \(Cl(3, 0)\): 8 blades, 3 bivectors. Standard 3D rotations (molecular geometry, point clouds).
  • \(Cl(6, 0)\): 64 blades, 15 bivectors. High-dimensional embedding rotations.

Minkowski \(Cl(p, q)\)

Mixed signature. Basis vectors \(e_1, \ldots, e_p\) square to \(+1\); \(e_{p+1}, \ldots, e_n\) square to \(-1\).

  • \(Cl(1, 1)\): 4 blades, 1 bivector. The bivector \(e_{12}\) generates Lorentz boosts (hyperbolic rotations) instead of circular rotations.
  • \(Cl(1, 3)\): Spacetime algebra. Rotors encode both spatial rotations and Lorentz boosts.

Conformal \(Cl(n+1, 1)\)

Adds two extra dimensions to represent the point at infinity (\(e_\infty\)) and the origin (\(e_o\)). In this representation, translations, rotations, and dilations are all rotors.

  • \(Cl(4, 1)\): Conformal model of 3D Euclidean space.
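Concretely, in the standard conformal model a Euclidean point \(x\) is embedded as the null vector

\[P = x + \tfrac{1}{2}\|x\|^2\, e_\infty + e_o, \qquad P^2 = 0,\]

which is what allows rotations, translations, and dilations of points to all be expressed as rotor sandwiches.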

5. Grade Projection

The grade-\(k\) projection of a multivector extracts only components with exactly \(k\) basis vectors:

\[\langle M \rangle_k = \sum_{\text{popcount}(I)=k} m_I e_I\]

Versor implements this by masking: for each index \(I\), check if popcount(I) == k.
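A minimal sketch of this masking, using dict-based multivectors (not Versor's internal storage):

```python
def grade_project(mv, k):
    """Grade-k part of a multivector stored as {blade_index: coeff}."""
    return {i: c for i, c in mv.items() if bin(i).count("1") == k}

# M = 1 + 2 e_1 + 3 e_12 in Cl(3,0); binary indices 0, 1, 3
M = {0: 1.0, 1: 2.0, 3: 3.0}
assert grade_project(M, 1) == {1: 2.0}
assert grade_project(M, 2) == {3: 3.0}
```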

6. Exponentiation Methods

Closed-Form (Default for Bivectors)

For simple bivectors, Versor uses the signature-aware closed-form described in Section 3. This requires zero geometric products and is exact.

Decomposed Exponential (Non-Simple Bivectors)

For \(n \geq 4\), bivectors may be non-simple (cannot be written as a single wedge product \(u \wedge v\)). Versor decomposes them via GA power iteration (Pence et al. 2025):

  1. Extract the dominant simple component \(B_1\) via right-contraction power iteration
  2. Subtract: \(B_{\text{residual}} = B - B_1\)
  3. Repeat until residual is negligible
  4. Exponentiate each simple component in closed form
  5. Compose via geometric product: \(\exp(B) \approx \exp(B_1) \cdot \exp(B_2) \cdots\)

Scaling and Squaring (Taylor Fallback)

Computing \(\exp(A)\) directly via Taylor series can be numerically unstable for large \(\|A\|\). The fallback uses scaling and squaring:

  1. Find \(k\) such that \(\|A\| / 2^k \leq 1\)
  2. Compute \(E = \exp(A / 2^k)\) via Taylor series (order 8)
  3. Square \(k\) times: \(\exp(A) = E^{2^k}\)

This ensures the Taylor series converges quickly on a small argument, then recovers the full result through repeated squaring.
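The three steps can be demonstrated on complex numbers, which form the even subalgebra of \(Cl(2,0)\); the same scheme carries over to general multivectors (an illustrative sketch, not Versor's _exp_taylor):

```python
import cmath

def exp_scaling_squaring(a, order=8):
    """Taylor exponential stabilized by scaling and squaring."""
    k = 0
    while abs(a) / (2 ** k) > 1.0:     # 1. find k with ||a|| / 2^k <= 1
        k += 1
    x = a / (2 ** k)
    term, e = 1.0 + 0j, 1.0 + 0j
    for i in range(1, order + 1):      # 2. order-8 Taylor on the small argument
        term *= x / i
        e += term
    for _ in range(k):                 # 3. square k times to recover exp(a)
        e *= e
    return e

z = 3.0 + 4.0j
assert abs(exp_scaling_squaring(z) - cmath.exp(z)) < 1e-4
```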

7. Multi-Rotor Superposition

A single rotor rotates in one plane. The MultiRotorLayer uses \(K\) rotors in parallel:

\[x' = \sum_{k=1}^{K} w_k R_k x \tilde{R}_k\]

where \(w_k\) are learned mixing weights and each \(R_k = \exp(-B_k/2)\) rotates in a different plane. This is a spectral decomposition of the transformation: complex geometric operations are expressed as weighted sums of simple rotations.

Complexity: \(O(K \cdot n^2)\) bivector parameters instead of \(O(2^n)\) for a general multivector transformation.
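The weighted superposition can be sketched with 2D rotation matrices standing in for the rotor sandwiches \(R_k x \tilde{R}_k\) (a stand-in for illustration, not Versor's MultiRotorLayer):

```python
import math

def rot(theta):
    """2D rotation matrix standing in for the rotor sandwich R x R~."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s], [s, c]]

def multi_rotor(x, weights, thetas):
    """x' = sum_k w_k R_k x R_k~, here as a weighted sum of plane rotations."""
    out = [0.0, 0.0]
    for w, t in zip(weights, thetas):
        R = rot(t)
        out[0] += w * (R[0][0] * x[0] + R[0][1] * x[1])
        out[1] += w * (R[1][0] * x[0] + R[1][1] * x[1])
    return out

# equal mix of +60 and -60 degree rotations: the sine parts cancel
out = multi_rotor([1.0, 0.0], [0.5, 0.5], [math.pi / 3, -math.pi / 3])
assert abs(out[0] - 0.5) < 1e-12 and abs(out[1]) < 1e-12
```

Note that the two sine components interfere destructively here, a simple instance of the interference patterns discussed in Section 9.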

8. Justification of the \(O(K \cdot n^2)\) Approximation

The complexity reduction from \(O(2^n)\) (for a full linear map on the multivector space) to \(O(K \cdot n^2)\) (for \(K\) rotors) is not merely an algorithmic optimization; it is rooted in deep geometric principles.

Geometric Spectral Decomposition

Just as any signal can be decomposed into a sum of sinusoids (Fourier transform), we posit that any geometrically meaningful transformation can be approximated by a weighted sum of simple rotations (rotors).

  • A single rotor \(R = \exp(-B/2)\) represents an atomic geometric operation.
  • The weighted sum \(\sum w_k R_k x \tilde{R}_k\) effectively constructs a "Fourier series" on the group of rotations (the Spin group).
  • By limiting \(K\), we regularize the model to learn only the most significant geometric features, filtering out high-frequency noise that would require a dense \(2^n \times 2^n\) matrix to represent.

Cartan-Dieudonné Theorem

The Cartan-Dieudonné theorem states that every orthogonal transformation of an \(n\)-dimensional space can be composed of at most \(n\) reflections. Since a rotor is the product of two reflections, any rotation can be decomposed into a sequence of simple rotors.

  • While the theorem applies to composition (\(R = R_k \cdots R_1\)), our additive model \(\sum w_k (R_k x \tilde{R}_k)\) can be seen as a linearized approximation or an ensemble of these fundamental paths.
  • This theoretical guarantee suggests that the space of rotors is "rich enough" to cover the manifold of interest without needing the full linear group \(GL(2^n)\).

Geometric Sparsity

Real-world data is geometrically sparse.

  • In a high-dimensional feature space (e.g., \(n = 32 \to 2^{32}\) dimensions), useful transformations rarely involve arbitrary mixings of all \(2^{32}\) components.
  • Meaningful operations tend to be local in "grade-space" (e.g., rotating vectors to vectors, bivectors to bivectors) and local in "basis-space" (involving only a few related planes).
  • The \(O(K \cdot n^2)\) parameterization enforces this sparsity: it forces the network to find the specific 2D planes (bivectors) where the action happens, rather than learning a dense matrix that statistically correlates everything with everything.

9. The Power of Non-Orthogonal Rotors

While the Cartan-Dieudonné theorem guarantees a decomposition into a minimal set of orthogonal planes, Versor intentionally breaks this rigidity: we move from a minimal basis to an overcomplete frame.

Geometric Filtering

When multiple rotors operate in overlapping, non-orthogonal planes, they create complex geometric interference patterns. These patterns act as band-pass filters for manifold curvature, effectively pruning non-geometric noise. The network learns to "tune" these interferences to resonate only with the true geometric structure of the data, leading to a much cleaner and more representative latent space.

Shared Backbone Effect

Non-orthogonal planes form an overcomplete geometric frame. This redundancy allows different feature channels to "share" the same latent geometric features, acting as a shared backbone. Much like multi-task learning, this forced sharing regularizes the model and allows it to generalize geometric truths across disparate parts of the input space.

Redundancy for Robustness

By moving from rigid orthogonal bases to flexible, overcomplete non-orthogonal frames, Versor achieves significantly higher resilience against data perturbations. The redundancy in the frame means that if one part of the input is noisy or corrupted, the remaining overlapping rotors can still reconstruct the underlying geometry. Furthermore, this overparameterization (in terms of planes, not just raw weights) leads to smoother gradient flows during training, as the optimizer has multiple redundant geometric paths to reach the same transformation.

10. Lipschitz Continuity

A function \(f: X \to Y\) is Lipschitz continuous with constant \(L\) (written \(L\) here to avoid a clash with the rotor count \(K\)) if:

\[d_Y(f(x_1), f(x_2)) \leq L\, d_X(x_1, x_2)\]

for all \(x_1, x_2 \in X\). For neural networks, constraining \(L\) (usually \(L \approx 1\)) is crucial for:

  1. Robustness: Limiting sensitivity to adversarial perturbations.
  2. Stability: Preventing gradient explosion/vanishing in deep networks.
  3. Generative Modeling: Ensuring the Kantorovich-Rubinstein duality holds for Wasserstein GANs (WGANs) and other optimal transport methods.

The Isometric Advantage

In Versor, the core operation is the rotor sandwich product \(x' = R x \tilde{R}\). Because rotors are unit versors (\(R\tilde{R} = 1\)), this operation is an isometry:

\[\|R x \tilde{R}\| = \|x\|\]

This means the rotor transformation naturally has a Lipschitz constant of exactly 1.
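This can be checked numerically with a throwaway dict-based \(Cl(3,0)\) implementation (a self-contained sketch, unrelated to Versor's internals):

```python
import math

def popcount(n):
    return bin(n).count("1")

def gp(A, B):
    """Geometric product of {blade_index: coeff} dicts in Euclidean Cl(3,0)."""
    out = {}
    for i, a in A.items():
        for j, b in B.items():
            swaps, x = 0, i >> 1          # commutation sign from reordering
            while x:
                swaps += popcount(x & j)
                x >>= 1
            k = i ^ j                     # result blade index is the XOR
            out[k] = out.get(k, 0.0) + (-1.0 if swaps & 1 else 1.0) * a * b
    return out

def rev(A):
    """Reversion: sign (-1)^(k(k-1)/2) per grade-k blade."""
    return {i: (-1) ** (popcount(i) * (popcount(i) - 1) // 2) * c
            for i, c in A.items()}

def norm(A):
    return math.sqrt(sum(c * c for c in A.values()))

# rotor for a 90-degree rotation in the e_12 plane: R = cos(pi/4) - sin(pi/4) e_12
R = {0: math.cos(math.pi / 4), 3: -math.sin(math.pi / 4)}
x = {1: 1.0, 2: 2.0, 4: 3.0}             # vector e_1 + 2 e_2 + 3 e_3
xp = gp(gp(R, x), rev(R))
assert abs(norm(xp) - norm(x)) < 1e-12   # Lipschitz constant exactly 1
```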

Unlike standard neural networks, which must use spectral normalization, weight clipping, or gradient penalties to enforce Lipschitz constraints (often only approximately), Versor's RotorLayer satisfies this property by construction, while GeometricGELU and CliffordLayerNorm control only the radial scale.

Note on Current Implementation: While rotor operations are strictly isometric, the current CliffordLinear layer uses standard scalar weight matrices for channel mixing, which do not inherently guarantee 1-Lipschitz continuity. Our roadmap includes replacing these with compositions of irreducible rotors to achieve end-to-end geometric Lipschitz guarantees.