easygraph.nn.convs.hypergraphs.unignn_conv module#

class easygraph.nn.convs.hypergraphs.unignn_conv.UniGATConv(*args: Any, **kwargs: Any)[source]#

Bases: Module

The UniGAT convolution layer proposed in the paper UniGNN: A Unified Framework for Graph and Hypergraph Neural Networks (IJCAI 2021).

Sparse Format:

\[\begin{split}\left\{ \begin{aligned} \alpha_{i e} &=\sigma\left(a^{T}\left[W h_{\{i\}} ; W h_{e}\right]\right) \\ \tilde{\alpha}_{i e} &=\frac{\exp \left(\alpha_{i e}\right)}{\sum_{e^{\prime} \in \tilde{E}_{i}} \exp \left(\alpha_{i e^{\prime}}\right)} \\ \tilde{x}_{i} &=\sum_{e \in \tilde{E}_{i}} \tilde{\alpha}_{i e} W h_{e} \end{aligned} \right. .\end{split}\]
Parameters:
  • in_channels (int) – \(C_{in}\) is the number of input channels.

  • out_channels (int) – \(C_{out}\) is the number of output channels.

  • bias (bool) – If set to False, the layer will not learn the bias parameter. Defaults to True.

  • use_bn (bool) – If set to True, the layer will use batch normalization. Defaults to False.

  • drop_rate (float) – The dropout probability. If drop_rate <= 0, the layer will not apply dropout. Defaults to 0.5.

  • atten_neg_slope (float) – Negative slope of the LeakyReLU activation used in edge attention. Defaults to 0.2.

  • is_last (bool) – If set to True, the layer will not apply the final activation and dropout functions. Defaults to False.

forward(X: torch.Tensor, hg: Hypergraph) torch.Tensor[source]#

The forward function.

Parameters:
  • X (torch.Tensor) – Input vertex feature matrix. Size \((|\mathcal{V}|, C_{in})\).

  • hg (eg.Hypergraph) – The hypergraph structure that contains \(|\mathcal{V}|\) vertices.
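The sparse-format equations above can be sketched in plain NumPy. Everything here is illustrative: the edge-list hypergraph representation, the mean aggregation producing \(h_e\), and the names `unigat_sketch`, `W`, and `a` are assumptions for this example, not the layer's actual API.

```python
import numpy as np

def leaky_relu(x, neg_slope=0.2):
    return np.where(x > 0, x, neg_slope * x)

def unigat_sketch(X, edges, W, a, neg_slope=0.2):
    # X: (N, C_in) vertex features; edges: list of vertex-index lists;
    # W: (C_in, C_out) weight; a: (2 * C_out,) attention vector.
    XW = X @ W                                                # W h_i for each vertex
    He = np.stack([XW[list(e)].mean(axis=0) for e in edges])  # W h_e via mean aggregation
    N, C = XW.shape
    out = np.zeros((N, C))
    for i in range(N):
        inc = [k for k, e in enumerate(edges) if i in e]      # incident hyperedges E_i
        scores = np.array([leaky_relu(a @ np.concatenate([XW[i], He[k]]), neg_slope)
                           for k in inc])                     # alpha_{ie}
        attn = np.exp(scores - scores.max())
        attn /= attn.sum()                                    # softmax over E_i
        out[i] = sum(w * He[k] for w, k in zip(attn, inc))    # tilde{x}_i
    return out
```

Each vertex attends only over its incident hyperedges, so a vertex in a single hyperedge receives that hyperedge's feature unchanged.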

class easygraph.nn.convs.hypergraphs.unignn_conv.UniGCNConv(*args: Any, **kwargs: Any)[source]#

Bases: Module

The UniGCN convolution layer proposed in the paper UniGNN: A Unified Framework for Graph and Hypergraph Neural Networks (IJCAI 2021).

Sparse Format:

\[\begin{split}\left\{ \begin{aligned} h_{e} &= \frac{1}{|e|} \sum_{j \in e} x_{j} \\ \tilde{x}_{i} &= \frac{1}{\sqrt{d_{i}}} \sum_{e \in \tilde{E}_{i}} \frac{1}{\sqrt{\tilde{d}_{e}}} W h_{e} \end{aligned} \right. .\end{split}\]

where \(\tilde{d}_{e} = \frac{1}{|e|} \sum_{i \in e} d_{i}\).

Matrix Format:

\[\mathbf{X}^{\prime} = \sigma \left( \mathbf{D}_v^{-\frac{1}{2}} \mathbf{H} \tilde{\mathbf{D}}_e^{-\frac{1}{2}} \cdot \mathbf{D}_e^{-1} \mathbf{H}^\top \mathbf{X} \mathbf{\Theta} \right) .\]
Parameters:
  • in_channels (int) – \(C_{in}\) is the number of input channels.

  • out_channels (int) – \(C_{out}\) is the number of output channels.

  • bias (bool) – If set to False, the layer will not learn the bias parameter. Defaults to True.

  • use_bn (bool) – If set to True, the layer will use batch normalization. Defaults to False.

  • drop_rate (float) – The dropout probability. If drop_rate <= 0, the layer will not apply dropout. Defaults to 0.5.

  • is_last (bool) – If set to True, the layer will not apply the final activation and dropout functions. Defaults to False.

forward(X: torch.Tensor, hg: Hypergraph) torch.Tensor[source]#

The forward function.

Parameters:
  • X (torch.Tensor) – Input vertex feature matrix. Size \((|\mathcal{V}|, C_{in})\).

  • hg (eg.Hypergraph) – The hypergraph structure that contains \(|\mathcal{V}|\) vertices.
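The forward pass amounts to the two sparse-format equations: aggregate each hyperedge by the mean of its vertices, then redistribute to vertices with the \(d_i\) and \(\tilde{d}_e\) degree normalizations. A minimal NumPy sketch, assuming an edge-list hypergraph representation (the function name and signature are invented for this example):

```python
import numpy as np

def unigcn_sketch(X, edges, W):
    # X: (N, C_in) vertex features; edges: list of vertex-index lists; W: (C_in, C_out).
    N = X.shape[0]
    d = np.zeros(N)
    for e in edges:
        for i in e:
            d[i] += 1                                         # vertex degrees d_i
    He = np.stack([X[list(e)].mean(axis=0) for e in edges]) @ W   # W h_e, h_e = mean over e
    de = np.array([d[list(e)].mean() for e in edges])         # tilde{d}_e = mean member degree
    out = np.zeros((N, W.shape[1]))
    for k, e in enumerate(edges):
        for i in e:
            out[i] += He[k] / np.sqrt(de[k])                  # sum over incident hyperedges
    return out / np.sqrt(d)[:, None]                          # 1/sqrt(d_i) normalization
```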

class easygraph.nn.convs.hypergraphs.unignn_conv.UniGINConv(*args: Any, **kwargs: Any)[source]#

Bases: Module

The UniGIN convolution layer proposed in the paper UniGNN: A Unified Framework for Graph and Hypergraph Neural Networks (IJCAI 2021).

Sparse Format:

\[\begin{split}\left\{ \begin{aligned} h_{e} &= \frac{1}{|e|} \sum_{j \in e} x_{j} \\ \tilde{x}_{i} &= W\left((1+\varepsilon) x_{i}+\sum_{e \in E_{i}} h_{e}\right) \end{aligned} \right. .\end{split}\]

Matrix Format:

\[\mathbf{X}^{\prime} = \sigma \left( \left( \left( 1 + \varepsilon \right) \mathbf{I} + \mathbf{H} \mathbf{D}_e^{-1} \mathbf{H}^\top \right) \mathbf{X} \mathbf{\Theta} \right) .\]
Parameters:
  • in_channels (int) – \(C_{in}\) is the number of input channels.

  • out_channels (int) – \(C_{out}\) is the number of output channels.

  • eps (float) – The initial value of the parameter \(\varepsilon\). Defaults to 0.0.

  • train_eps (bool) – If set to True, the layer will learn the \(\varepsilon\) parameter. Defaults to False.

  • bias (bool) – If set to False, the layer will not learn the bias parameter. Defaults to True.

  • use_bn (bool) – If set to True, the layer will use batch normalization. Defaults to False.

  • drop_rate (float) – The dropout probability. If drop_rate <= 0, the layer will not apply dropout. Defaults to 0.5.

  • is_last (bool) – If set to True, the layer will not apply the final activation and dropout functions. Defaults to False.

forward(X: torch.Tensor, hg: Hypergraph) torch.Tensor[source]#

The forward function.

Parameters:
  • X (torch.Tensor) – Input vertex feature matrix. Size \((|\mathcal{V}|, C_{in})\).

  • hg (eg.Hypergraph) – The hypergraph structure that contains \(|\mathcal{V}|\) vertices.

class easygraph.nn.convs.hypergraphs.unignn_conv.UniSAGEConv(*args: Any, **kwargs: Any)[source]#

Bases: Module

The UniSAGE convolution layer proposed in the paper UniGNN: A Unified Framework for Graph and Hypergraph Neural Networks (IJCAI 2021).

Sparse Format:

\[\begin{split}\left\{ \begin{aligned} h_{e} &= \frac{1}{|e|} \sum_{j \in e} x_{j} \\ \tilde{x}_{i} &= W\left(x_{i}+\operatorname{AGGREGATE}\left(\left\{x_{j}\right\}_{j \in \mathcal{N}_{i}}\right)\right) \end{aligned} \right. .\end{split}\]

Matrix Format:

\[\mathbf{X}^{\prime} = \sigma \left( \left( \mathbf{I} + \mathbf{H} \mathbf{D}_e^{-1} \mathbf{H}^\top \right) \mathbf{X} \mathbf{\Theta} \right) .\]
Parameters:
  • in_channels (int) – \(C_{in}\) is the number of input channels.

  • out_channels (int) – \(C_{out}\) is the number of output channels.

  • bias (bool) – If set to False, the layer will not learn the bias parameter. Defaults to True.

  • use_bn (bool) – If set to True, the layer will use batch normalization. Defaults to False.

  • drop_rate (float) – The dropout probability. If drop_rate <= 0, the layer will not apply dropout. Defaults to 0.5.

  • is_last (bool) – If set to True, the layer will not apply the final activation and dropout functions. Defaults to False.

forward(X: torch.Tensor, hg: Hypergraph) torch.Tensor[source]#

The forward function.

Parameters:
  • X (torch.Tensor) – Input vertex feature matrix. Size \((|\mathcal{V}|, C_{in})\).

  • hg (eg.Hypergraph) – The hypergraph structure that contains \(|\mathcal{V}|\) vertices.
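The matrix format \(\left( \mathbf{I} + \mathbf{H} \mathbf{D}_e^{-1} \mathbf{H}^\top \right) \mathbf{X} \mathbf{\Theta}\) can be reproduced directly from the incidence matrix. A NumPy sketch with an illustrative edge-list hypergraph (dense matrices for clarity; the real layer uses sparse operations and the function name here is invented):

```python
import numpy as np

def unisage_sketch(X, edges, W):
    # X: (N, C_in) vertex features; edges: list of vertex-index lists; W: (C_in, C_out).
    N = X.shape[0]
    H = np.zeros((N, len(edges)))
    for k, e in enumerate(edges):
        H[list(e), k] = 1.0                                   # incidence matrix H
    De_inv = np.diag(1.0 / H.sum(axis=0))                     # D_e^{-1}, degrees |e|
    return (np.eye(N) + H @ De_inv @ H.T) @ X @ W             # (I + H D_e^{-1} H^T) X Theta
```

With mean aggregation, this coincides with the UniGIN update at \(\varepsilon = 0\), which matches the two matrix formats above.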