Equivariant Geometric Graph Neural Networks

Message Passing

Most graph neural networks are formulated under the message passing framework, where node representations are updated by exchanging information with neighboring nodes. At each layer, information is propagated along graph edges through a sequence of message construction, aggregation, and update steps.

For a given edge connecting two nodes, it is convenient to define a direction for information flow. Although molecular graphs are typically undirected, message passing is implemented as two directed operations per edge to simplify notation and computation. In this formulation, one node generates a message based on its current state, which is then sent to its neighbor and aggregated with other incoming messages.

We can consider node j as the message sender, while node i is the message receiver.

Figure: EGNN message-passing schematic.

This sender–receiver convention allows GNNs to model asymmetric interactions, incorporate edge features, and naturally extend to directed graphs and attention-based mechanisms. In physical systems, this perspective aligns with the idea that each atom contributes an interaction-dependent influence to its neighbors, which are then combined to determine the local environment of the receiving atom.
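The construct–aggregate–update cycle described above can be sketched in plain NumPy. The function and weight names below are illustrative, not from any particular library:

```python
import numpy as np

def message_passing_step(h, edges, W_msg, W_upd):
    """One message-passing layer: construct, aggregate, update.

    h     : (n_nodes, d) node feature matrix
    edges : list of directed (sender j, receiver i) pairs
    W_msg, W_upd : illustrative learnable weight matrices
    """
    agg = np.zeros_like(h)
    for j, i in edges:                  # node j sends a message to node i
        m_ij = np.tanh(h[j] @ W_msg)    # message built from the sender's state
        agg[i] += m_ij                  # sum-aggregate incoming messages at i
    return np.tanh(h @ W_upd + agg)     # update every node with its aggregate

# An undirected molecular edge becomes two directed messages, one per direction.
rng = np.random.default_rng(0)
h = rng.normal(size=(3, 4))
edges = [(0, 1), (1, 0), (1, 2), (2, 1)]
W_msg, W_upd = rng.normal(size=(4, 4)), rng.normal(size=(4, 4))
h_new = message_passing_step(h, edges, W_msg, W_upd)
```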

Equivariance vs Invariance

Invariance: A function is invariant if its output does not change when the input is transformed.

“Move or rotate the molecule/protein — the prediction stays the same.”

Example: the predicted energy of a molecule stays the same no matter how the molecule is oriented in space.

Equivariance: A function is equivariant if its output transforms in the same way as the input.

“Move or rotate the molecule/protein — the output moves or rotates in exactly the same way. The transformation is described by a group action (group theory).”

Example: the predicted forces on each atom rotate together with the molecule.

In molecular and protein systems, physical laws are independent of the choice of coordinate system. If a molecule is translated or rotated in space, its energy, binding affinity, and chemical identity do not change. Forces and velocities, however, must rotate consistently with the molecule.

Equivariant geometric GNNs encode this prior directly into the model architecture, ensuring that predictions transform correctly under Euclidean symmetries.

Group Theory

Euclidean Groups

The Euclidean group E(3) consists of all rotations, reflections, and translations of 3-D space; SE(3) is the subgroup of rigid motions (rotations and translations only, no reflections). A function \(f\) is equivariant to a group \(G\) if: \[ f(g \cdot x) = g \cdot f(x), \quad \forall g \in G \]
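This definition can be checked numerically. The sketch below (plain NumPy, illustrative function names) verifies invariance for pairwise distances and equivariance for mean-centered coordinates under a random rotation:

```python
import numpy as np

def pairwise_dist(x):
    """Invariant: interatomic distances do not change under rotation."""
    diff = x[:, None, :] - x[None, :, :]
    return np.linalg.norm(diff, axis=-1)

def centered(x):
    """Equivariant: centered coordinates rotate together with the input."""
    return x - x.mean(axis=0)

rng = np.random.default_rng(42)
x = rng.normal(size=(5, 3))                   # 5 atoms in 3-D
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))  # random orthogonal matrix
if np.linalg.det(Q) < 0:
    Q[:, 0] *= -1                             # make it a proper rotation

assert np.allclose(pairwise_dist(x @ Q.T), pairwise_dist(x))   # invariance
assert np.allclose(centered(x @ Q.T), centered(x) @ Q.T)       # equivariance
```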

| Quantity | Transformation under rotation |
| --- | --- |
| Atomic coordinates | Rotate |
| Interatomic distances | Invariant |
| Energy | Invariant |
| Forces | Rotate |
| Dipole moments | Rotate |

Traditional GNNs often enforce invariance only. Equivariant GNNs preserve the full transformation structure.

Design Principles

Equivariant models rely on three key ideas:

  1. Relative geometry only
    • Use \(x_i - x_j\), not absolute positions
  2. Scalar nonlinearities
    • Apply nonlinear functions only to invariant quantities
  3. Structured feature types
    • Distinguish scalars, vectors, and higher-order tensors

These constraints guarantee equivariance by construction.
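The three principles can be combined into a toy equivariant operation. The sketch below (illustrative, plain NumPy) builds a per-node vector feature from relative positions only, applies its nonlinearity only to the invariant squared distances, and outputs scalar-weighted vectors:

```python
import numpy as np

def equivariant_vector_feature(x):
    """Per-node vector feature obeying the three design principles."""
    rel = x[:, None, :] - x[None, :, :]      # (1) relative positions x_i - x_j
    d2 = (rel ** 2).sum(-1)                  # invariant squared distances
    w = np.exp(-d2)                          # (2) nonlinearity on an invariant
    np.fill_diagonal(w, 0.0)                 # drop the i == j self-term
    return (w[..., None] * rel).sum(axis=1)  # (3) scalar-weighted vectors

rng = np.random.default_rng(1)
x = rng.normal(size=(4, 3))
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))   # random orthogonal transform

# Equivariance by construction: transforming the input transforms the output.
out_rot = equivariant_vector_feature(x @ Q.T)
assert np.allclose(out_rot, equivariant_vector_feature(x) @ Q.T)
```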


EGNN

Philosophy

E(n) Equivariant Graph Neural Networks (EGNN) aim to provide the simplest possible equivariant GNN: equivariance to rotations, reflections, and translations is achieved using only invariant scalar messages and coordinate updates along relative position vectors, without spherical harmonics or higher-order tensor features.

Mathematical Structure

Each node has:

  • an invariant feature vector \(h_i^{l}\)
  • a coordinate vector \(x_i^{l} \in \mathbb{R}^3\)

Messages depend only on invariant quantities: \[ m_{ij} = \phi_e(h_i^{l}, h_j^{l}, \|x_i^{l} - x_j^{l}\|^2, a_{ij}) \]

Coordinates are updated using scaled relative displacements: \[ x_i^{l+1} = x_i^{l} + C\sum_j (x_i^{l} - x_j^{l}) \, \phi_x(m_{ij}) \]

Nodes are updated based on the aggregation of messages: \[ m_i = \sum_{j} m_{ij} \]

\[ h_i^{l+1} = \phi_h(h_i^{l}, m_i) \]
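The three update equations above can be sketched as a single NumPy layer. The dense all-pairs formulation, tiny ReLU MLPs, and parameter names here are illustrative simplifications, not the reference implementation:

```python
import numpy as np

def mlp(w1, w2, z):
    return np.maximum(z @ w1, 0.0) @ w2      # tiny two-layer ReLU MLP

def egnn_layer(h, x, p, C=0.1):
    """One EGNN layer: invariant messages, coordinate and feature updates."""
    n = h.shape[0]
    rel = x[:, None, :] - x[None, :, :]                    # x_i - x_j
    d2 = (rel ** 2).sum(-1, keepdims=True)                 # ||x_i - x_j||^2
    hi = np.repeat(h[:, None, :], n, axis=1)               # h_i per edge
    hj = np.repeat(h[None, :, :], n, axis=0)               # h_j per edge
    m = mlp(p["we1"], p["we2"],
            np.concatenate([hi, hj, d2], axis=-1))         # m_ij = phi_e(...)
    mask = 1.0 - np.eye(n)[..., None]                      # exclude j == i
    phi_x = mlp(p["wx1"], p["wx2"], m)                     # scalar edge weight
    x_new = x + C * (rel * phi_x * mask).sum(axis=1)       # coordinate update
    m_i = (m * mask).sum(axis=1)                           # aggregate messages
    h_new = mlp(p["wh1"], p["wh2"],
                np.concatenate([h, m_i], axis=-1))         # feature update
    return h_new, x_new

rng = np.random.default_rng(0)
dh, dm, k = 4, 4, 8
p = {"we1": rng.normal(size=(2 * dh + 1, k)), "we2": rng.normal(size=(k, dm)),
     "wx1": rng.normal(size=(dm, k)),         "wx2": rng.normal(size=(k, 1)),
     "wh1": rng.normal(size=(dh + dm, k)),    "wh2": rng.normal(size=(k, dh))}
h, x = rng.normal(size=(5, dh)), rng.normal(size=(5, 3))
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))               # orthogonal transform
t = rng.normal(size=3)                                     # translation

h1, x1 = egnn_layer(h, x, p)
h2, x2 = egnn_layer(h, x @ Q.T + t, p)
assert np.allclose(h1, h2)             # features are invariant
assert np.allclose(x2, x1 @ Q.T + t)   # coordinates are equivariant
```

The final two assertions check the layer's defining property directly: transforming the input coordinates leaves the features unchanged and transforms the output coordinates in the same way.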

Why This Is Equivariant

The message function \(\phi_e\) receives only invariant inputs (node features and squared distances), so the messages \(m_{ij}\), their aggregate \(m_i\), and the updated features \(h_i^{l+1}\) are invariant. The coordinate update adds a sum of relative displacements \(x_i - x_j\) weighted by invariant scalars: under a rotation every displacement rotates, so the whole update rotates with it, and translations cancel in the differences.

Strengths

  • Simple to implement in standard PyTorch, with no special equivariant operations
  • Low computational cost; fast training and inference
  • Equivariant to the full E(n) group, including reflections

Weaknesses

  • Features are scalars only; no vector or tensor channels
  • Angular information is only implicit, which limits expressivity

Typical Use Cases

  • Large graphs or high-throughput settings where speed matters more than maximal geometric expressivity


SE(3)-Transformer

Philosophy

SE(3)-Transformers pursue maximum expressivity, explicitly modeling the representation theory of the rotation group.

They treat node features as irreducible representations (irreps):

  • type-0 features (scalars), which are invariant under rotation
  • type-1 features (vectors), which rotate with the input
  • higher-order types, which transform via Wigner D-matrices

Equivariant Attention

Messages are constructed using:

  • attention weights \(\alpha_{ij}\) built from invariant quantities
  • learnable radial functions \(R_{l,l'}(r_{ij})\) of the interatomic distance
  • spherical harmonics \(Y_{l,m}(\hat{r}_{ij})\) of the edge direction

Conceptually: \[ m_{ij}^{(l)} = \sum_{l'} \alpha_{ij} \, R_{l,l'}(r_{ij}) \, Y_{l,m}(\hat{r}_{ij}) \, h_j^{(l')} \]
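The role of the spherical harmonics is easiest to see in the simplest nontrivial case: up to a constant factor, the degree-1 (\(l = 1\)) real spherical harmonics are just the unit direction vector, so they rotate together with the edge. The sketch below (plain NumPy, illustrative) checks this; higher degrees transform analogously via Wigner D-matrices, which is what dedicated equivariant libraries implement:

```python
import numpy as np

def sh_l1(r):
    """Degree-1 real spherical harmonics of a direction, up to a constant:
    simply the unit vector along r (component ordering ignored here)."""
    return r / np.linalg.norm(r)

rng = np.random.default_rng(3)
r = rng.normal(size=3)                        # an edge vector r_ij
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
if np.linalg.det(Q) < 0:
    Q[:, 0] *= -1                             # make Q a proper rotation

# Rotating the edge rotates its l=1 harmonics in exactly the same way.
assert np.allclose(sh_l1(Q @ r), Q @ sh_l1(r))
```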

Why This Is Powerful

Spherical harmonics inject explicit angular information, and the irrep structure lets every layer build and propagate directional (vector and tensor) features rather than only invariant scalars, while attention makes the interactions content-dependent.

Strengths

  • Very high expressivity
  • Explicit angular information via spherical harmonics
  • Attention enables anisotropic, content-dependent interactions

Weaknesses

  • High computational cost; slower training and inference
  • Requires specialized equivariant operations (e.g., Clebsch–Gordan tensor products)

Typical Use Cases

  • Tasks where fine geometric detail dominates and the extra compute is justified


EGNN vs SE(3)-Transformer

| Aspect | EGNN | SE(3)-Transformer |
| --- | --- | --- |
| Equivariance | E(n) | SE(3) |
| Feature types | Scalars only | Scalars + vectors + tensors |
| Angular info | Implicit (limited) | Explicit (via spherical harmonics) |
| Complexity | Low | High |
| Speed | Fast | Slow |
| Expressivity | Moderate | Very high |
| Implementation | Simple PyTorch | Requires equivariant ops |

Relation to Physics-Based Modeling

Equivariant GNNs bridge ML and physics by:

  • building Euclidean symmetry priors directly into the architecture
  • predicting invariant quantities (e.g., energies) and equivariant quantities (e.g., forces) in a physically consistent way

They are often used as:

  • fast surrogates for expensive physics-based calculations
  • candidate generators or scoring functions that feed into physics-based refinement

In practice, they complement rather than replace molecular dynamics and quantum chemistry.


Practical Guidelines for Drug Discovery

A common workflow:

Equivariant GNN → candidate generation → physics-based refinement (MD / docking)

