easygraph.nn.convs.hypergraphs.hgnn_conv module#

class easygraph.nn.convs.hypergraphs.hgnn_conv.HGNNConv(*args: Any, **kwargs: Any)[source]#

Bases: Module

The HGNN convolution layer proposed in the Hypergraph Neural Networks paper (AAAI 2019). Matrix format:

\[\mathbf{X}^{\prime} = \sigma \left( \mathbf{D}_v^{-\frac{1}{2}} \mathbf{H} \mathbf{W}_e \mathbf{D}_e^{-1} \mathbf{H}^\top \mathbf{D}_v^{-\frac{1}{2}} \mathbf{X} \mathbf{\Theta} \right).\]

where \(\mathbf{X}\) is the input vertex feature matrix, \(\mathbf{H}\) is the hypergraph incidence matrix, \(\mathbf{W}_e\) is a diagonal hyperedge weight matrix, \(\mathbf{D}_v\) is a diagonal vertex degree matrix, \(\mathbf{D}_e\) is a diagonal hyperedge degree matrix, \(\mathbf{\Theta}\) is the matrix of learnable parameters, and \(\sigma\) is a nonlinear activation function.
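The propagation rule above can be illustrated with a small dense-matrix computation. This is a hypothetical NumPy sketch of the formula, not EasyGraph's implementation; the incidence matrix, weights, and ReLU activation are illustrative assumptions.

```python
import numpy as np

# Illustrative dense computation of the HGNN propagation rule (not the
# library's actual code). N vertices, M hyperedges.
rng = np.random.default_rng(0)
N, M, C_in, C_out = 4, 3, 5, 2
H = np.array([[1, 0, 1],
              [1, 1, 0],
              [0, 1, 1],
              [0, 0, 1]], dtype=float)   # incidence matrix (N, M)
X = rng.standard_normal((N, C_in))       # vertex features
Theta = rng.standard_normal((C_in, C_out))
W_e = np.eye(M)                          # hyperedge weights (identity here)

D_v = H @ W_e @ np.ones(M)               # vertex degrees d(v) = sum_e w_e H[v, e]
D_e = H.sum(axis=0)                      # hyperedge degrees d(e) = sum_v H[v, e]

Dv_inv_sqrt = np.diag(D_v ** -0.5)
De_inv = np.diag(1.0 / D_e)

# Normalized propagation matrix from the formula above.
L = Dv_inv_sqrt @ H @ W_e @ De_inv @ H.T @ Dv_inv_sqrt
X_out = np.maximum(L @ X @ Theta, 0.0)   # sigma = ReLU (an assumption here)
print(X_out.shape)                       # (4, 2)
```

The output has one row per vertex and \(C_{out}\) columns, matching \(\mathbf{X}^{\prime} \in \mathbb{R}^{N \times C_{out}}\).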

Parameters:
  • in_channels (int) – \(C_{in}\) is the number of input channels.

  • out_channels (int) – \(C_{out}\) is the number of output channels.

  • bias (bool) – If set to False, the layer will not learn the bias parameter. Defaults to True.

  • use_bn (bool) – If set to True, the layer will use batch normalization. Defaults to False.

  • drop_rate (float) – If set to a positive number, the layer will apply dropout with this probability. Defaults to 0.5.

  • is_last (bool) – If set to True, the layer will not apply the final activation and dropout functions. Defaults to False.

forward(X: torch.Tensor, hg: Hypergraph) → torch.Tensor[source]#

The forward function.

Parameters:
  • X (torch.Tensor) – Input vertex feature matrix. Size \((N, C_{in})\).

  • hg (eg.Hypergraph) – The hypergraph structure that contains \(N\) vertices.
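To make the layer's behavior concrete, here is a minimal PyTorch re-implementation sketch of the propagation rule. It is not EasyGraph's source: it takes a dense incidence matrix `H` instead of an `eg.Hypergraph` object, assumes unit hyperedge weights, and hardcodes ReLU as the activation.

```python
import torch
import torch.nn as nn

class HGNNConvSketch(nn.Module):
    """Minimal sketch of the HGNN propagation rule (not EasyGraph's code).

    Takes a dense incidence matrix H of shape (N, M) rather than an
    eg.Hypergraph, and assumes unit hyperedge weights.
    """
    def __init__(self, in_channels: int, out_channels: int, bias: bool = True):
        super().__init__()
        self.theta = nn.Linear(in_channels, out_channels, bias=bias)

    def forward(self, X: torch.Tensor, H: torch.Tensor) -> torch.Tensor:
        d_v = H.sum(dim=1)                    # vertex degrees
        d_e = H.sum(dim=0)                    # hyperedge degrees
        Dv_inv_sqrt = torch.diag(d_v.pow(-0.5))
        De_inv = torch.diag(1.0 / d_e)
        # D_v^{-1/2} H D_e^{-1} H^T D_v^{-1/2}
        L = Dv_inv_sqrt @ H @ De_inv @ H.t() @ Dv_inv_sqrt
        return torch.relu(L @ self.theta(X))  # sigma = ReLU (an assumption)

# 3 vertices, 2 hyperedges: e0 = {v0, v1}, e1 = {v1, v2}
H = torch.tensor([[1., 0.],
                  [1., 1.],
                  [0., 1.]])
conv = HGNNConvSketch(in_channels=4, out_channels=2)
Y = conv(torch.rand(3, 4), H)
print(Y.shape)  # torch.Size([3, 2])
```

The actual layer is called the same way, `conv(X, hg)`, but with an `eg.Hypergraph` carrying the incidence structure; the returned feature matrix has size \((N, C_{out})\).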