easygraph.nn.convs.hypergraphs package
Submodules
easygraph.nn.convs.hypergraphs.dhcf_conv module
- class easygraph.nn.convs.hypergraphs.dhcf_conv.JHConv(in_channels: int, out_channels: int, bias: bool = True, use_bn: bool = False, drop_rate: float = 0.5, is_last: bool = False)[source]
Bases: Module
The Jump Hypergraph Convolution layer proposed in the Dual Channel Hypergraph Collaborative Filtering paper (KDD 2020).
Matrix Format:
\[\mathbf{X}^{\prime} = \sigma \left( \mathbf{D}_v^{-\frac{1}{2}} \mathbf{H} \mathbf{W}_e \mathbf{D}_e^{-1} \mathbf{H}^\top \mathbf{D}_v^{-\frac{1}{2}} \mathbf{X} \mathbf{\Theta} + \mathbf{X} \right).\]
- Parameters:
in_channels (int) – \(C_{in}\) is the number of input channels.
out_channels (int) – \(C_{out}\) is the number of output channels.
bias (bool) – If set to False, the layer will not learn the bias parameter. Defaults to True.
use_bn (bool) – If set to True, the layer will use batch normalization. Defaults to False.
drop_rate (float) – If set to a positive number, the layer will use dropout. Defaults to 0.5.
is_last (bool) – If set to True, the layer will not apply the final activation and dropout functions. Defaults to False.
- forward(X: Tensor, hg: Hypergraph) → Tensor [source]
The forward function.
- Parameters:
X (torch.Tensor) – Input vertex feature matrix. Size \((N, C_{in})\).
hg (easygraph.Hypergraph) – The hypergraph structure that contains \(N\) vertices.
- training: bool
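A minimal usage sketch follows. The layer import matches the module path above; the `Hypergraph(num_v, e_list)` constructor is assumed to follow the DHG-style API referenced in this documentation, so check your installed EasyGraph version.

```python
import torch
from easygraph import Hypergraph  # DHG-style constructor assumed
from easygraph.nn.convs.hypergraphs.dhcf_conv import JHConv

# 5 vertices, 2 hyperedges: {0, 1, 2} and {2, 3, 4}
hg = Hypergraph(5, [[0, 1, 2], [2, 3, 4]])
X = torch.rand(5, 16)  # (N, C_in) vertex features

# C_in == C_out keeps the residual term "+ X" in the formula shape-compatible.
conv = JHConv(in_channels=16, out_channels=16)
out = conv(X, hg)      # (N, C_out)
print(out.shape)       # torch.Size([5, 16])
```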
easygraph.nn.convs.hypergraphs.halfnlh_conv module
easygraph.nn.convs.hypergraphs.hgnn_conv module
- class easygraph.nn.convs.hypergraphs.hgnn_conv.HGNNConv(in_channels: int, out_channels: int, bias: bool = True, use_bn: bool = False, drop_rate: float = 0.5, is_last: bool = False)[source]
Bases: Module
The HGNN convolution layer proposed in the Hypergraph Neural Networks paper (AAAI 2019).
Matrix Format:
\[\mathbf{X}^{\prime} = \sigma \left( \mathbf{D}_v^{-\frac{1}{2}} \mathbf{H} \mathbf{W}_e \mathbf{D}_e^{-1} \mathbf{H}^\top \mathbf{D}_v^{-\frac{1}{2}} \mathbf{X} \mathbf{\Theta} \right).\]
where \(\mathbf{X}\) is the input vertex feature matrix, \(\mathbf{H}\) is the hypergraph incidence matrix, \(\mathbf{W}_e\) is a diagonal hyperedge weight matrix, \(\mathbf{D}_v\) is a diagonal vertex degree matrix, \(\mathbf{D}_e\) is a diagonal hyperedge degree matrix, and \(\mathbf{\Theta}\) is the matrix of learnable parameters.
- Parameters:
in_channels (int) – \(C_{in}\) is the number of input channels.
out_channels (int) – \(C_{out}\) is the number of output channels.
bias (bool) – If set to False, the layer will not learn the bias parameter. Defaults to True.
use_bn (bool) – If set to True, the layer will use batch normalization. Defaults to False.
drop_rate (float) – If set to a positive number, the layer will use dropout. Defaults to 0.5.
is_last (bool) – If set to True, the layer will not apply the final activation and dropout functions. Defaults to False.
- forward(X: Tensor, hg: Hypergraph) → Tensor [source]
The forward function.
- Parameters:
X (torch.Tensor) – Input vertex feature matrix. Size \((N, C_{in})\).
hg (easygraph.Hypergraph) – The hypergraph structure that contains \(N\) vertices.
- training: bool
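In practice these layers are stacked, with is_last=True on the final layer so that it skips its activation and dropout. A sketch of a two-layer HGNN classifier built from this layer:

```python
import torch.nn as nn
from easygraph.nn.convs.hypergraphs.hgnn_conv import HGNNConv

class HGNN(nn.Module):
    """Two-layer HGNN; each layer applies the smoothing above followed by Theta."""

    def __init__(self, in_ch: int, hid_ch: int, num_classes: int) -> None:
        super().__init__()
        self.conv1 = HGNNConv(in_ch, hid_ch, drop_rate=0.5)
        self.conv2 = HGNNConv(hid_ch, num_classes, is_last=True)  # no act/dropout

    def forward(self, X, hg):
        return self.conv2(self.conv1(X, hg), hg)
```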
easygraph.nn.convs.hypergraphs.hgnnp_conv module
- class easygraph.nn.convs.hypergraphs.hgnnp_conv.HGNNPConv(in_channels: int, out_channels: int, bias: bool = True, use_bn: bool = False, drop_rate: float = 0.5, is_last: bool = False)[source]
Bases: Module
The HGNN+ convolution layer proposed in the HGNN+: General Hypergraph Neural Networks paper (IEEE T-PAMI 2022).
Sparse Format:
\[\begin{split}\left\{ \begin{aligned} m_{\beta}^{t} &=\sum_{\alpha \in \mathcal{N}_{v}(\beta)} M_{v}^{t}\left(x_{\alpha}^{t}\right) \\ y_{\beta}^{t} &=U_{e}^{t}\left(w_{\beta}, m_{\beta}^{t}\right) \\ m_{\alpha}^{t+1} &=\sum_{\beta \in \mathcal{N}_{e}(\alpha)} M_{e}^{t}\left(x_{\alpha}^{t}, y_{\beta}^{t}\right) \\ x_{\alpha}^{t+1} &=U_{v}^{t}\left(x_{\alpha}^{t}, m_{\alpha}^{t+1}\right) \\ \end{aligned} \right.\end{split}\]
Matrix Format:
\[\mathbf{X}^{\prime} = \sigma \left( \mathbf{D}_v^{-1} \mathbf{H} \mathbf{W}_e \mathbf{D}_e^{-1} \mathbf{H}^\top \mathbf{X} \mathbf{\Theta} \right).\]
- Parameters:
in_channels (int) – \(C_{in}\) is the number of input channels.
out_channels (int) – \(C_{out}\) is the number of output channels.
bias (bool) – If set to False, the layer will not learn the bias parameter. Defaults to True.
use_bn (bool) – If set to True, the layer will use batch normalization. Defaults to False.
drop_rate (float) – If set to a positive number, the layer will use dropout. Defaults to 0.5.
is_last (bool) – If set to True, the layer will not apply the final activation and dropout functions. Defaults to False.
- forward(X: Tensor, hg: Hypergraph) → Tensor [source]
The forward function.
- Parameters:
X (torch.Tensor) – Input vertex feature matrix. Size \((|\mathcal{V}|, C_{in})\).
hg (easygraph.Hypergraph) – The hypergraph structure that contains \(|\mathcal{V}|\) vertices.
- training: bool
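The matrix form above can be checked by hand with dense tensors. The sketch below builds a small incidence matrix \(\mathbf{H}\) and applies \(\sigma(\mathbf{D}_v^{-1} \mathbf{H} \mathbf{W}_e \mathbf{D}_e^{-1} \mathbf{H}^\top \mathbf{X} \mathbf{\Theta})\) directly, taking \(\mathbf{W}_e\) as the identity (unit hyperedge weights, an assumption) and ReLU as \(\sigma\):

```python
import torch

# Incidence matrix for 4 vertices and 2 hyperedges: {0, 1, 2} and {2, 3}
H = torch.tensor([[1., 0.],
                  [1., 0.],
                  [1., 1.],
                  [0., 1.]])
W_e = torch.eye(2)               # unit hyperedge weights (assumption)
D_v = torch.diag(H.sum(dim=1))   # diagonal vertex degree matrix
D_e = torch.diag(H.sum(dim=0))   # diagonal hyperedge degree matrix

X = torch.rand(4, 8)             # (|V|, C_in)
Theta = torch.rand(8, 8)         # learnable weights, random here

out = torch.relu(
    torch.linalg.inv(D_v) @ H @ W_e @ torch.linalg.inv(D_e) @ H.T @ X @ Theta
)
print(out.shape)                 # torch.Size([4, 8])
```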
easygraph.nn.convs.hypergraphs.hnhn_conv module
- class easygraph.nn.convs.hypergraphs.hnhn_conv.HNHNConv(in_channels: int, out_channels: int, bias: bool = True, use_bn: bool = False, drop_rate: float = 0.5, is_last: bool = False)[source]
Bases: Module
The HNHN convolution layer proposed in the HNHN: Hypergraph Networks with Hyperedge Neurons paper (ICML 2020).
- Parameters:
in_channels (int) – \(C_{in}\) is the number of input channels.
out_channels (int) – \(C_{out}\) is the number of output channels.
bias (bool) – If set to False, the layer will not learn the bias parameter. Defaults to True.
use_bn (bool) – If set to True, the layer will use batch normalization. Defaults to False.
drop_rate (float) – If set to a positive number, the layer will use dropout. Defaults to 0.5.
is_last (bool) – If set to True, the layer will not apply the final activation and dropout functions. Defaults to False.
- forward(X: Tensor, hg: Hypergraph) → Tensor [source]
The forward function.
- Parameters:
X (torch.Tensor) – Input vertex feature matrix. Size \((|\mathcal{V}|, C_{in})\).
hg (easygraph.Hypergraph) – The hypergraph structure that contains \(|\mathcal{V}|\) vertices.
- training: bool
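Because dropout (drop_rate) and batch normalization (use_bn) are only active in training mode, toggle the module state as with any PyTorch module. A short sketch:

```python
import torch
from easygraph.nn.convs.hypergraphs.hnhn_conv import HNHNConv

conv = HNHNConv(16, 32, use_bn=True, drop_rate=0.5)

conv.train()   # dropout active; batch norm updates its running statistics
# ... training loop: out = conv(X, hg) ...

conv.eval()    # dropout disabled; batch norm uses its running statistics
# ... inference: with torch.no_grad(): out = conv(X, hg) ...
```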
easygraph.nn.convs.hypergraphs.hypergcn_conv module
- class easygraph.nn.convs.hypergraphs.hypergcn_conv.HyperGCNConv(in_channels: int, out_channels: int, use_mediator: bool = False, bias: bool = True, use_bn: bool = False, drop_rate: float = 0.5, is_last: bool = False)[source]
Bases: Module
The HyperGCN convolution layer proposed in the HyperGCN: A New Method of Training Graph Convolutional Networks on Hypergraphs paper (NeurIPS 2019).
- Parameters:
in_channels (int) – \(C_{in}\) is the number of input channels.
out_channels (int) – \(C_{out}\) is the number of output channels.
use_mediator (bool) – Whether to use a mediator to transform the hyperedges to edges in the graph. Defaults to False.
bias (bool) – If set to False, the layer will not learn the bias parameter. Defaults to True.
use_bn (bool) – If set to True, the layer will use batch normalization. Defaults to False.
drop_rate (float) – If set to a positive number, the layer will use dropout. Defaults to 0.5.
is_last (bool) – If set to True, the layer will not apply the final activation and dropout functions. Defaults to False.
- forward(X: Tensor, hg: Hypergraph, cached_g: Graph | None = None) → Tensor [source]
The forward function.
- Parameters:
X (torch.Tensor) – Input vertex feature matrix. Size \((N, C_{in})\).
hg (easygraph.Hypergraph) – The hypergraph structure that contains \(N\) vertices.
cached_g (easygraph.Graph) – The pre-transformed graph structure derived from the hypergraph structure, containing \(N\) vertices. If not provided, the graph structure will be recomputed on every forward pass. Defaults to None.
- training: bool
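When the hypergraph is static, the hyperedge-to-graph transformation can be computed once and reused via cached_g instead of being rebuilt on every call. A sketch; note that `Graph.from_hypergraph_hypergcn` mirrors DHG's API, and its presence and exact name in EasyGraph are assumptions here:

```python
import torch
# Graph.from_hypergraph_hypergcn follows DHG's API; its exact name in
# EasyGraph is an assumption -- check your installed version.
from easygraph import Graph, Hypergraph
from easygraph.nn.convs.hypergraphs.hypergcn_conv import HyperGCNConv

hg = Hypergraph(5, [[0, 1, 2], [2, 3, 4]])
X = torch.rand(5, 16)
conv = HyperGCNConv(16, 8, use_mediator=False)

# Build the graph once, then reuse it on every forward pass.
cached_g = Graph.from_hypergraph_hypergcn(hg, X, with_mediator=False)
for _ in range(100):
    out = conv(X, hg, cached_g=cached_g)
```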
easygraph.nn.convs.hypergraphs.unignn_conv module
- class easygraph.nn.convs.hypergraphs.unignn_conv.UniGATConv(in_channels: int, out_channels: int, bias: bool = True, use_bn: bool = False, drop_rate: float = 0.5, atten_neg_slope: float = 0.2, is_last: bool = False)[source]
Bases: Module
The UniGAT convolution layer proposed in the UniGNN: a Unified Framework for Graph and Hypergraph Neural Networks paper (IJCAI 2021).
Sparse Format:
\[\begin{split}\left\{ \begin{aligned} \alpha_{i e} &=\sigma\left(a^{T}\left[W h_{\{i\}} ; W h_{e}\right]\right) \\ \tilde{\alpha}_{i e} &=\frac{\exp \left(\alpha_{i e}\right)}{\sum_{e^{\prime} \in \tilde{E}_{i}} \exp \left(\alpha_{i e^{\prime}}\right)} \\ \tilde{x}_{i} &=\sum_{e \in \tilde{E}_{i}} \tilde{\alpha}_{i e} W h_{e} \end{aligned} \right. .\end{split}\]
- Parameters:
in_channels (int) – \(C_{in}\) is the number of input channels.
out_channels (int) – \(C_{out}\) is the number of output channels.
bias (bool) – If set to False, the layer will not learn the bias parameter. Defaults to True.
use_bn (bool) – If set to True, the layer will use batch normalization. Defaults to False.
drop_rate (float) – The dropout probability. If drop_rate <= 0, the layer will not drop values. Defaults to 0.5.
atten_neg_slope (float) – Hyper-parameter of the LeakyReLU activation of edge attention. Defaults to 0.2.
is_last (bool) – If set to True, the layer will not apply the final activation and dropout functions. Defaults to False.
- forward(X: Tensor, hg: Hypergraph) → Tensor [source]
The forward function.
- Parameters:
X (torch.Tensor) – Input vertex feature matrix. Size \((|\mathcal{V}|, C_{in})\).
hg (easygraph.Hypergraph) – The hypergraph structure that contains \(|\mathcal{V}|\) vertices.
- training: bool
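A usage sketch: atten_neg_slope tunes the LeakyReLU applied to the attention logits, and a non-positive drop_rate disables dropout entirely (the Hypergraph constructor is assumed, as above):

```python
import torch
from easygraph import Hypergraph  # DHG-style constructor assumed
from easygraph.nn.convs.hypergraphs.unignn_conv import UniGATConv

hg = Hypergraph(5, [[0, 1, 2], [2, 3, 4]])
X = torch.rand(5, 16)

# Steeper negative slope for the attention LeakyReLU; dropout disabled.
conv = UniGATConv(16, 8, atten_neg_slope=0.3, drop_rate=0.0)
out = conv(X, hg)   # (5, 8)
```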
- class easygraph.nn.convs.hypergraphs.unignn_conv.UniGCNConv(in_channels: int, out_channels: int, bias: bool = True, use_bn: bool = False, drop_rate: float = 0.5, is_last: bool = False)[source]
Bases: Module
The UniGCN convolution layer proposed in the UniGNN: a Unified Framework for Graph and Hypergraph Neural Networks paper (IJCAI 2021).
Sparse Format:
\[\begin{split}\left\{ \begin{aligned} h_{e} &= \frac{1}{|e|} \sum_{j \in e} x_{j} \\ \tilde{x}_{i} &= \frac{1}{\sqrt{d_{i}}} \sum_{e \in \tilde{E}_{i}} \frac{1}{\sqrt{\tilde{d}_{e}}} W h_{e} \end{aligned} \right. .\end{split}\]
where \(\tilde{d}_{e} = \frac{1}{|e|} \sum_{i \in e} d_{i}\).
Matrix Format:
\[\mathbf{X}^{\prime} = \sigma \left( \mathbf{D}_v^{-\frac{1}{2}} \mathbf{H} \tilde{\mathbf{D}}_e^{-\frac{1}{2}} \cdot \mathbf{D}_e^{-1} \mathbf{H}^\top \mathbf{X} \mathbf{\Theta} \right) .\]
- Parameters:
in_channels (int) – \(C_{in}\) is the number of input channels.
out_channels (int) – \(C_{out}\) is the number of output channels.
bias (bool) – If set to False, the layer will not learn the bias parameter. Defaults to True.
use_bn (bool) – If set to True, the layer will use batch normalization. Defaults to False.
drop_rate (float) – If set to a positive number, the layer will use dropout. Defaults to 0.5.
is_last (bool) – If set to True, the layer will not apply the final activation and dropout functions. Defaults to False.
- forward(X: Tensor, hg: Hypergraph) → Tensor [source]
The forward function.
- Parameters:
X (torch.Tensor) – Input vertex feature matrix. Size \((|\mathcal{V}|, C_{in})\).
hg (easygraph.Hypergraph) – The hypergraph structure that contains \(|\mathcal{V}|\) vertices.
- training: bool
- class easygraph.nn.convs.hypergraphs.unignn_conv.UniGINConv(in_channels: int, out_channels: int, eps: float = 0.0, train_eps: bool = False, bias: bool = True, use_bn: bool = False, drop_rate: float = 0.5, is_last: bool = False)[source]
Bases: Module
The UniGIN convolution layer proposed in the UniGNN: a Unified Framework for Graph and Hypergraph Neural Networks paper (IJCAI 2021).
Sparse Format:
\[\begin{split}\left\{ \begin{aligned} h_{e} &= \frac{1}{|e|} \sum_{j \in e} x_{j} \\ \tilde{x}_{i} &= W\left((1+\varepsilon) x_{i}+\sum_{e \in E_{i}} h_{e}\right) \end{aligned} \right. .\end{split}\]
Matrix Format:
\[\mathbf{X}^{\prime} = \sigma \left( \left( \left( 1 + \varepsilon \right) \mathbf{I} + \mathbf{H} \mathbf{D}_e^{-1} \mathbf{H}^\top \right) \mathbf{X} \mathbf{\Theta} \right) .\]
- Parameters:
in_channels (int) – \(C_{in}\) is the number of input channels.
out_channels (int) – \(C_{out}\) is the number of output channels.
eps (float) – The initial value of \(\varepsilon\). Defaults to 0.0.
train_eps (bool) – If set to True, the layer will learn the \(\varepsilon\) parameter. Defaults to False.
bias (bool) – If set to False, the layer will not learn the bias parameter. Defaults to True.
use_bn (bool) – If set to True, the layer will use batch normalization. Defaults to False.
drop_rate (float) – If set to a positive number, the layer will use dropout. Defaults to 0.5.
is_last (bool) – If set to True, the layer will not apply the final activation and dropout functions. Defaults to False.
- forward(X: Tensor, hg: Hypergraph) → Tensor [source]
The forward function.
- Parameters:
X (torch.Tensor) – Input vertex feature matrix. Size \((|\mathcal{V}|, C_{in})\).
hg (easygraph.Hypergraph) – The hypergraph structure that contains \(|\mathcal{V}|\) vertices.
- training: bool
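With train_eps=True, \(\varepsilon\) is expected to be registered as a learnable parameter and optimized together with \(\mathbf{\Theta}\). A short sketch:

```python
from easygraph.nn.convs.hypergraphs.unignn_conv import UniGINConv

# eps gives the initial value of epsilon; train_eps=True makes it learnable.
conv = UniGINConv(16, 8, eps=0.1, train_eps=True)

# Epsilon should now appear among the layer's trainable parameters.
for name, _ in conv.named_parameters():
    print(name)
```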
- class easygraph.nn.convs.hypergraphs.unignn_conv.UniSAGEConv(in_channels: int, out_channels: int, bias: bool = True, use_bn: bool = False, drop_rate: float = 0.5, is_last: bool = False)[source]
Bases: Module
The UniSAGE convolution layer proposed in the UniGNN: a Unified Framework for Graph and Hypergraph Neural Networks paper (IJCAI 2021).
Sparse Format:
\[\begin{split}\left\{ \begin{aligned} h_{e} &= \frac{1}{|e|} \sum_{j \in e} x_{j} \\ \tilde{x}_{i} &= W\left(x_{i}+\text{AGGREGATE}\left(\left\{x_{j}\right\}_{j \in \mathcal{N}_{i}}\right)\right) \end{aligned} \right. .\end{split}\]
Matrix Format:
\[\mathbf{X}^{\prime} = \sigma \left( \left( \mathbf{I} + \mathbf{H} \mathbf{D}_e^{-1} \mathbf{H}^\top \right) \mathbf{X} \mathbf{\Theta} \right) .\]
- Parameters:
in_channels (int) – \(C_{in}\) is the number of input channels.
out_channels (int) – \(C_{out}\) is the number of output channels.
bias (bool) – If set to False, the layer will not learn the bias parameter. Defaults to True.
use_bn (bool) – If set to True, the layer will use batch normalization. Defaults to False.
drop_rate (float) – If set to a positive number, the layer will use dropout. Defaults to 0.5.
is_last (bool) – If set to True, the layer will not apply the final activation and dropout functions. Defaults to False.
- forward(X: Tensor, hg: Hypergraph) → Tensor [source]
The forward function.
- Parameters:
X (torch.Tensor) – Input vertex feature matrix. Size \((|\mathcal{V}|, C_{in})\).
hg (easygraph.Hypergraph) – The hypergraph structure that contains \(|\mathcal{V}|\) vertices.
- training: bool