Functional¶
Stateless operations with no or few learnable parameters (some activations carry a small learnable bias or gate).
Activations¶
GeometricGELU
¶
Bases: Module
Geometric GELU activation: x' = x * GELU(||x|| + b) / ||x||.
Scales magnitude while preserving direction.
Attributes:

| Name | Type | Description |
|---|---|---|
| `algebra` | `CliffordAlgebra` | The algebra instance. |
| `bias` | `Parameter` | Learnable bias added to the norm. |
Source code in functional/activation.py
__init__(algebra, channels=1)
¶
Initialize Geometric GELU.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `algebra` | `CliffordAlgebra` | The algebra instance. | *required* |
| `channels` | `int` | Number of channels. | `1` |
Source code in functional/activation.py
forward(x)
¶
Apply geometric GELU activation.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `x` | `Tensor` | Input multivector `[..., Dim]`. | *required* |

Returns:

| Type | Description |
|---|---|
| `Tensor` | Activated multivector. |
Source code in functional/activation.py
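The formula above can be sketched in plain PyTorch. This is an illustrative free function, not this module's API; the `eps` guard against division by zero is an assumption about the implementation:

```python
import torch
import torch.nn.functional as F

def geometric_gelu(x: torch.Tensor, bias: float = 0.0, eps: float = 1e-8) -> torch.Tensor:
    """x' = x * GELU(||x|| + b) / ||x||, over the last (multivector) dim."""
    norm = x.norm(dim=-1, keepdim=True)          # ||x|| per multivector
    scale = F.gelu(norm + bias) / (norm + eps)   # rescales magnitude only
    return x * scale                             # direction is unchanged

x = torch.randn(4, 8)      # batch of 4 multivectors, 8 coefficients each
y = geometric_gelu(x)
```

Because the scale factor is a nonnegative scalar per multivector, the output always points in the same direction as the input.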
GradeSwish
¶
Bases: Module
Per-grade gated activation.
Each grade receives an independent sigmoid gate based on its norm.
Attributes:

| Name | Type | Description |
|---|---|---|
| `algebra` | `CliffordAlgebra` | The algebra instance. |
| `n_grades` | `int` | Number of grades. |
| `grade_weights` | `Parameter` | Weights for each grade gate. |
| `grade_biases` | `Parameter` | Biases for each grade gate. |
Source code in functional/activation.py
__init__(algebra, channels=1)
¶
Initialize Grade Swish.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `algebra` | `CliffordAlgebra` | The algebra instance. | *required* |
| `channels` | `int` | Number of channels. | `1` |
Source code in functional/activation.py
forward(x)
¶
Apply per-grade gating.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `x` | `Tensor` | Input multivector `[..., Dim]`. | *required* |

Returns:

| Type | Description |
|---|---|
| `Tensor` | Activated multivector. |
Source code in functional/activation.py
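A minimal sketch of per-grade gating, assuming the algebra exposes which coefficient indices belong to each grade (the `grade_slices` argument and the Cl(2) index layout below are illustrative assumptions, not this module's API):

```python
import torch

def grade_swish(x, grade_slices, weights, biases):
    """Gate each grade k by sigmoid(w_k * ||x_k|| + b_k) of its own norm."""
    out = torch.zeros_like(x)
    for k, idx in enumerate(grade_slices):
        xk = x[..., idx]                  # coefficients belonging to grade k
        gate = torch.sigmoid(weights[k] * xk.norm(dim=-1, keepdim=True) + biases[k])
        out[..., idx] = xk * gate         # independent gate per grade
    return out

# Assumed Cl(2) layout: grade 0 = {1}, grade 1 = {e1, e2}, grade 2 = {e12}
grades = [[0], [1, 2], [3]]
w, b = torch.ones(3), torch.zeros(3)
x = torch.randn(5, 4)
y = grade_swish(x, grades, w, b)
```

Since each sigmoid gate lies in (0, 1), every coefficient is attenuated rather than amplified.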
Losses¶
ChamferDistance
¶
Bases: Module
Symmetric Chamfer distance between two point clouds.
CD(P, Q) = (1/|P|) sum_p min_q ||p-q||^2 + (1/|Q|) sum_q min_p ||q-p||^2
Standard metric for 3D point cloud reconstruction and generation.
Source code in functional/loss.py
forward(pred, target)
¶
Compute Chamfer distance.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `pred` | `Tensor` | Predicted point cloud `[B, M, 3]`. | *required* |
| `target` | `Tensor` | Target point cloud `[B, N, 3]`. | *required* |

Returns:

| Type | Description |
|---|---|
| `Tensor` | Chamfer distance (scalar). |
Source code in functional/loss.py
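The formula above can be sketched directly with `torch.cdist` (an illustrative stand-alone function, not necessarily how this module computes it):

```python
import torch

def chamfer_distance(pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    """Symmetric Chamfer distance between batched point clouds.

    pred: [B, M, 3], target: [B, N, 3] -> scalar.
    """
    d2 = torch.cdist(pred, target) ** 2              # pairwise ||p-q||^2, [B, M, N]
    p_to_q = d2.min(dim=2).values.mean(dim=1)        # (1/|P|) sum_p min_q
    q_to_p = d2.min(dim=1).values.mean(dim=1)        # (1/|Q|) sum_q min_p
    return (p_to_q + q_to_p).mean()                  # average over the batch

p = torch.randn(2, 16, 3)
loss = chamfer_distance(p, p)   # identical clouds -> (numerically) zero
```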
ConservativeLoss
¶
Bases: Module
Enforces F = -grad(E) conservative force constraint.
Physics: forces should be the negative gradient of energy with respect to atomic positions. Used in molecular dynamics tasks.
Source code in functional/loss.py
forward(energy, force_pred, pos)
¶
Compute conservative force loss.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `energy` | `Tensor` | Predicted energy (scalar, requires grad graph). | *required* |
| `force_pred` | `Tensor` | Predicted forces `[N, 3]`. | *required* |
| `pos` | `Tensor` | Atom positions `[N, 3]` (must have `requires_grad=True`). | *required* |

Returns:

| Type | Description |
|---|---|
| `Tensor` | MSE between predicted forces and `-grad(E)`. |
Source code in functional/loss.py
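The constraint F = -grad(E) can be sketched with `torch.autograd.grad` (an illustrative stand-alone function; the toy quadratic energy below is just for demonstration):

```python
import torch

def conservative_loss(energy, force_pred, pos):
    """MSE between predicted forces and -dE/dpos (pos must require grad)."""
    grad_e, = torch.autograd.grad(energy, pos, create_graph=True)
    return torch.mean((force_pred + grad_e) ** 2)   # force_pred - (-grad_e)

pos = torch.randn(8, 3, requires_grad=True)
energy = (pos ** 2).sum()        # toy energy E = sum ||r||^2
forces = (-2.0 * pos).detach()   # analytic -grad(E) for this energy
loss = conservative_loss(energy, forces, pos)
```

With the analytically consistent forces above, the loss is exactly zero; `create_graph=True` keeps the gradient differentiable so the penalty itself can be trained through.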
PhysicsInformedLoss
¶
Bases: Module
Physics-informed loss combining MSE with conservation penalty.
Enforces that global weighted mean of each variable is approximately conserved between forecast and target. Used in weather forecasting.
Source code in functional/loss.py
forward(forecast, target, lat_weights=None)
¶
Compute physics-informed loss.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `forecast` | `Tensor` | Predicted state `[B, H, W, C]`. | *required* |
| `target` | `Tensor` | Target state `[B, H, W, C]`. | *required* |
| `lat_weights` | `Tensor` | Latitude area weights `[H]`. | `None` |

Returns:

| Type | Description |
|---|---|
| `Tensor` | Combined MSE + conservation penalty. |
Source code in functional/loss.py
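One plausible form of this loss, sketched below: the penalty weight `lam` and the exact shape of the conservation term are assumptions, not taken from this module:

```python
import torch

def weighted_mean(x, w):
    """Area-weighted global mean per variable: x [B, H, W, C], w [H] -> [B, C]."""
    wn = (w / w.sum()).view(1, -1, 1, 1)
    return (x * wn).sum(dim=1).mean(dim=1)

def physics_informed_loss(forecast, target, lat_weights=None, lam=0.1):
    """MSE plus a penalty on drift of the global weighted mean per channel."""
    mse = torch.mean((forecast - target) ** 2)
    if lat_weights is None:
        lat_weights = torch.ones(forecast.shape[1])
    drift = weighted_mean(forecast, lat_weights) - weighted_mean(target, lat_weights)
    return mse + lam * torch.mean(drift ** 2)

f = torch.randn(2, 4, 6, 3)     # [B, H, W, C]
loss = physics_informed_loss(f, f)
```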
GeometricMSELoss
¶
Bases: Module
Geometric MSE. Euclidean distance in embedding space.
Standard MSE on coefficients.
Source code in functional/loss.py
__init__(algebra=None)
¶
SubspaceLoss
¶
Bases: Module
Subspace Loss. Enforces grade constraints.
Penalizes energy in forbidden grades.
Source code in functional/loss.py
__init__(algebra, target_indices=None, exclude_indices=None)
¶
Initialize grade constraint penalties.
Source code in functional/loss.py
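The core idea, penalizing energy in forbidden coefficients, can be sketched as follows (illustrative only; the real module is configured with `target_indices`/`exclude_indices` at construction):

```python
import torch

def subspace_loss(x: torch.Tensor, exclude_indices) -> torch.Tensor:
    """Mean squared energy in forbidden coefficient indices (grade constraint)."""
    return torch.mean(x[..., exclude_indices] ** 2)

x = torch.randn(4, 8)
x[..., [3, 5]] = 0.0                 # zero out the "forbidden" coefficients
loss = subspace_loss(x, [3, 5])      # -> exactly zero for this input
```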
IsometryLoss
¶
Bases: Module
Isometry loss enforcing metric norm preservation.
Ensures transformations preserve the metric norm.
Source code in functional/loss.py
__init__(algebra)
¶
forward(pred, target)
¶
Compares norms.
Source code in functional/loss.py
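Norm comparison can be sketched as below; the 2D rotation is just a convenient example of a norm-preserving map that drives the loss to zero:

```python
import math
import torch

def isometry_loss(pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    """Penalize any change in metric norm between prediction and target."""
    return torch.mean((pred.norm(dim=-1) - target.norm(dim=-1)) ** 2)

theta = 0.3
R = torch.tensor([[math.cos(theta), -math.sin(theta)],
                  [math.sin(theta),  math.cos(theta)]])   # rotation: an isometry
x = torch.randn(10, 2)
loss = isometry_loss(x @ R.T, x)   # rotation preserves norms -> ~0
```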
BivectorRegularization
¶
Bases: Module
Bivector regularization enforcing grade-2 purity.
Penalizes energy outside the target grade (default: grade 2).
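Grade-2 purity can be sketched by masking out the bivector components and measuring what remains. The Cl(3) basis ordering below (`[1, e1, e2, e3, e12, e13, e23, e123]`) is an assumption for illustration:

```python
import torch

def bivector_regularization(x: torch.Tensor, grade2_indices) -> torch.Tensor:
    """Mean squared energy outside the bivector (grade-2) components."""
    mask = torch.ones(x.shape[-1], dtype=torch.bool)
    mask[grade2_indices] = False          # keep only non-bivector coefficients
    return torch.mean(x[..., mask] ** 2)

biv = [4, 5, 6]                 # assumed grade-2 slots: e12, e13, e23
x = torch.zeros(2, 8)
x[..., biv] = torch.randn(2, 3)           # a pure bivector field
loss = bivector_regularization(x, biv)    # -> exactly zero
```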