easygraph.model.hypergraphs package
Submodules
easygraph.model.hypergraphs.dhcf module
- class easygraph.model.hypergraphs.dhcf.DHCF(num_users: int, num_items: int, emb_dim: int, num_layers: int = 3, drop_rate: float = 0.5)[source]
Bases: Module
The DHCF model proposed in Dual Channel Hypergraph Collaborative Filtering paper (KDD 2020).
Note
The user and item embeddings and trainable parameters are initialized with the xavier_uniform distribution.
- Parameters:
  - num_users (int) – The number of users.
  - num_items (int) – The number of items.
  - emb_dim (int) – Embedding dimension.
  - num_layers (int) – The number of layers. Defaults to 3.
  - drop_rate (float) – The dropout probability. Defaults to 0.5.
- forward(hg_ui: Hypergraph, hg_iu: Hypergraph) → Tuple[Tensor, Tensor] [source]
The forward function.
- Parameters:
  - hg_ui (eg.Hypergraph) – The hypergraph structure with users as vertices.
  - hg_iu (eg.Hypergraph) – The hypergraph structure with items as vertices.
- training: bool
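A minimal usage sketch, assuming eg.Hypergraph(num_v, e_list) builds a hypergraph from a vertex count and a list of hyperedges (each a list of vertex indices), and that the returned tuple holds the user and item representations; the toy interaction data below is illustrative only:

>>> import easygraph as eg
>>> from easygraph.model.hypergraphs.dhcf import DHCF
>>> # toy interaction data: 4 users, 6 items
>>> # hg_ui: users as vertices, one hyperedge per item (the users who rated it)
>>> hg_ui = eg.Hypergraph(4, [[0], [0, 1], [1], [1, 2], [2, 3], [3]])
>>> # hg_iu: items as vertices, one hyperedge per user (the items that user rated)
>>> hg_iu = eg.Hypergraph(6, [[0, 1], [1, 2, 3], [3, 4], [4, 5]])
>>> model = DHCF(num_users=4, num_items=6, emb_dim=32, num_layers=3)
>>> user_emb, item_emb = model(hg_ui, hg_iu)  # user and item representations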
easygraph.model.hypergraphs.dhne module
easygraph.model.hypergraphs.hgnn module
- class easygraph.model.hypergraphs.hgnn.HGNN(in_channels: int, hid_channels: int, num_classes: int, use_bn: bool = False, drop_rate: float = 0.5)[source]
Bases: Module
The HGNN model proposed in Hypergraph Neural Networks paper (AAAI 2019).
- Parameters:
  - in_channels (int) – \(C_{in}\) is the number of input channels.
  - hid_channels (int) – \(C_{hid}\) is the number of hidden channels.
  - num_classes (int) – The number of classes in the classification task.
  - use_bn (bool) – If set to True, use batch normalization. Defaults to False.
  - drop_rate (float, optional) – Dropout ratio. Defaults to 0.5.
- forward(X: Tensor, hg: Hypergraph) → Tensor [source]
The forward function.
- Parameters:
  - X (torch.Tensor) – Input vertex feature matrix. Size \((N, C_{in})\).
  - hg (eg.Hypergraph) – The hypergraph structure that contains \(N\) vertices.
- training: bool
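A minimal usage sketch for vertex classification, under the same assumption that eg.Hypergraph(num_v, e_list) builds a hypergraph from a vertex count and a list of hyperedges; sizes are illustrative only:

>>> import torch
>>> import easygraph as eg
>>> from easygraph.model.hypergraphs.hgnn import HGNN
>>> # toy hypergraph: 5 vertices, 3 hyperedges (lists of vertex indices)
>>> hg = eg.Hypergraph(5, [[0, 1, 2], [1, 3], [2, 3, 4]])
>>> X = torch.rand(5, 8)  # vertex features of size (N, C_in)
>>> model = HGNN(in_channels=8, hid_channels=16, num_classes=3)
>>> out = model(X, hg)  # vertex logits, expected size (N, num_classes)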
easygraph.model.hypergraphs.hgnnp module
- class easygraph.model.hypergraphs.hgnnp.HGNNP(in_channels: int, hid_channels: int, num_classes: int, use_bn: bool = False, drop_rate: float = 0.5)[source]
Bases: Module
The HGNN+ model proposed in HGNN+: General Hypergraph Neural Networks paper (IEEE T-PAMI 2022).
- Parameters:
  - in_channels (int) – \(C_{in}\) is the number of input channels.
  - hid_channels (int) – \(C_{hid}\) is the number of hidden channels.
  - num_classes (int) – The number of classes in the classification task.
  - use_bn (bool) – If set to True, use batch normalization. Defaults to False.
  - drop_rate (float, optional) – Dropout ratio. Defaults to 0.5.
- forward(X: Tensor, hg: eg.Hypergraph) → Tensor [source]
The forward function.
- Parameters:
  - X (torch.Tensor) – Input vertex feature matrix. Size \((N, C_{in})\).
  - hg (eg.Hypergraph) – The hypergraph structure that contains \(N\) vertices.
- training: bool
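A minimal usage sketch with batch normalization enabled, under the same eg.Hypergraph(num_v, e_list) assumption as above:

>>> import torch
>>> import easygraph as eg
>>> from easygraph.model.hypergraphs.hgnnp import HGNNP
>>> hg = eg.Hypergraph(6, [[0, 1, 2], [2, 3], [3, 4, 5]])
>>> X = torch.rand(6, 10)
>>> model = HGNNP(in_channels=10, hid_channels=32, num_classes=4, use_bn=True)
>>> logits = model(X, hg)
>>> pred = logits.argmax(dim=1)  # per-vertex class prediction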
easygraph.model.hypergraphs.hnhn module
- class easygraph.model.hypergraphs.hnhn.HNHN(in_channels: int, hid_channels: int, num_classes: int, use_bn: bool = False, drop_rate: float = 0.5)[source]
Bases: Module
The HNHN model proposed in HNHN: Hypergraph Networks with Hyperedge Neurons paper (ICML 2020).
- Parameters:
  - in_channels (int) – \(C_{in}\) is the number of input channels.
  - hid_channels (int) – \(C_{hid}\) is the number of hidden channels.
  - num_classes (int) – The number of classes in the classification task.
  - use_bn (bool) – If set to True, use batch normalization. Defaults to False.
  - drop_rate (float, optional) – Dropout ratio. Defaults to 0.5.
- forward(X: Tensor, hg: eg.Hypergraph) → Tensor [source]
The forward function.
- Parameters:
  - X (torch.Tensor) – Input vertex feature matrix. Size \((N, C_{in})\).
  - hg (eg.Hypergraph) – The hypergraph structure that contains \(N\) vertices.
- training: bool
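A minimal inference sketch, under the same eg.Hypergraph(num_v, e_list) assumption; eval() disables the dropout layer so the forward pass is deterministic:

>>> import torch
>>> import easygraph as eg
>>> from easygraph.model.hypergraphs.hnhn import HNHN
>>> hg = eg.Hypergraph(5, [[0, 1], [1, 2, 3], [0, 3, 4]])
>>> X = torch.rand(5, 8)
>>> model = HNHN(in_channels=8, hid_channels=16, num_classes=2)
>>> model = model.eval()  # disable dropout for a deterministic forward pass
>>> out = model(X, hg)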
easygraph.model.hypergraphs.hypergcn module
- class easygraph.model.hypergraphs.hypergcn.HyperGCN(in_channels: int, hid_channels: int, num_classes: int, use_mediator: bool = False, use_bn: bool = False, fast: bool = True, drop_rate: float = 0.5)[source]
Bases: Module
The HyperGCN model proposed in HyperGCN: A New Method of Training Graph Convolutional Networks on Hypergraphs paper (NeurIPS 2019).
- Parameters:
  - in_channels (int) – \(C_{in}\) is the number of input channels.
  - hid_channels (int) – \(C_{hid}\) is the number of hidden channels.
  - num_classes (int) – The number of classes in the classification task.
  - use_mediator (bool) – Whether to use a mediator to transform the hyperedges to edges in the graph. Defaults to False.
  - use_bn (bool) – If set to True, use batch normalization. Defaults to False.
  - fast (bool) – If set to True, the transformed graph structure will be computed once from the input hypergraph and vertex features, and cached for future use. Defaults to True.
  - drop_rate (float, optional) – Dropout ratio. Defaults to 0.5.
- forward(X: Tensor, hg: eg.Hypergraph) → Tensor [source]
The forward function.
- Parameters:
  - X (torch.Tensor) – Input vertex feature matrix. Size \((N, C_{in})\).
  - hg (eg.Hypergraph) – The hypergraph structure that contains \(N\) vertices.
- training: bool
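A minimal usage sketch, under the same eg.Hypergraph(num_v, e_list) assumption; it keeps the default fast=True, so the hypergraph-to-graph reduction is computed on the first forward pass and reused afterwards:

>>> import torch
>>> import easygraph as eg
>>> from easygraph.model.hypergraphs.hypergcn import HyperGCN
>>> hg = eg.Hypergraph(6, [[0, 1, 2, 3], [2, 3, 4], [4, 5]])
>>> X = torch.rand(6, 12)
>>> # use_mediator controls how each hyperedge is broken into graph edges
>>> model = HyperGCN(in_channels=12, hid_channels=16, num_classes=3, use_mediator=False, fast=True)
>>> out = model(X, hg)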
easygraph.model.hypergraphs.setgnn module
easygraph.model.hypergraphs.unignn module
- class easygraph.model.hypergraphs.unignn.UniGAT(in_channels: int, hid_channels: int, num_classes: int, num_heads: int, use_bn: bool = False, drop_rate: float = 0.5, atten_neg_slope: float = 0.2)[source]
Bases: Module
The UniGAT model proposed in UniGNN: a Unified Framework for Graph and Hypergraph Neural Networks paper (IJCAI 2021).
- Parameters:
  - in_channels (int) – \(C_{in}\) is the number of input channels.
  - hid_channels (int) – \(C_{hid}\) is the number of hidden channels.
  - num_classes (int) – The number of classes in the classification task.
  - num_heads (int) – The number of attention heads in each layer.
  - use_bn (bool) – If set to True, use batch normalization. Defaults to False.
  - drop_rate (float) – The dropout probability. Defaults to 0.5.
  - atten_neg_slope (float) – Hyper-parameter of the LeakyReLU activation of edge attention. Defaults to 0.2.
- forward(X: Tensor, hg: eg.Hypergraph) → Tensor [source]
The forward function.
- Parameters:
  - X (torch.Tensor) – Input vertex feature matrix. Size \((N, C_{in})\).
  - hg (eg.Hypergraph) – The hypergraph structure that contains \(N\) vertices.
- training: bool
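A minimal usage sketch with four attention heads, under the same eg.Hypergraph(num_v, e_list) assumption:

>>> import torch
>>> import easygraph as eg
>>> from easygraph.model.hypergraphs.unignn import UniGAT
>>> hg = eg.Hypergraph(5, [[0, 1, 2], [2, 3, 4]])
>>> X = torch.rand(5, 8)
>>> # atten_neg_slope is the LeakyReLU negative slope used in edge attention
>>> model = UniGAT(in_channels=8, hid_channels=16, num_classes=3, num_heads=4, atten_neg_slope=0.2)
>>> out = model(X, hg)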
- class easygraph.model.hypergraphs.unignn.UniGCN(in_channels: int, hid_channels: int, num_classes: int, use_bn: bool = False, drop_rate: float = 0.5)[source]
Bases: Module
The UniGCN model proposed in UniGNN: a Unified Framework for Graph and Hypergraph Neural Networks paper (IJCAI 2021).
- Parameters:
  - in_channels (int) – \(C_{in}\) is the number of input channels.
  - hid_channels (int) – \(C_{hid}\) is the number of hidden channels.
  - num_classes (int) – The number of classes in the classification task.
  - use_bn (bool) – If set to True, use batch normalization. Defaults to False.
  - drop_rate (float, optional) – Dropout ratio. Defaults to 0.5.
- forward(X: Tensor, hg: eg.Hypergraph) → Tensor [source]
The forward function.
- Parameters:
  - X (torch.Tensor) – Input vertex feature matrix. Size \((N, C_{in})\).
  - hg (eg.Hypergraph) – The hypergraph structure that contains \(N\) vertices.
- training: bool
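A minimal usage sketch, under the same eg.Hypergraph(num_v, e_list) assumption:

>>> import torch
>>> import easygraph as eg
>>> from easygraph.model.hypergraphs.unignn import UniGCN
>>> hg = eg.Hypergraph(4, [[0, 1], [1, 2, 3]])
>>> X = torch.rand(4, 6)
>>> model = UniGCN(in_channels=6, hid_channels=16, num_classes=2)
>>> out = model(X, hg)  # vertex logits, expected size (N, num_classes)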
- class easygraph.model.hypergraphs.unignn.UniGIN(in_channels: int, hid_channels: int, num_classes: int, eps: float = 0.0, train_eps: bool = False, use_bn: bool = False, drop_rate: float = 0.5)[source]
Bases: Module
The UniGIN model proposed in UniGNN: a Unified Framework for Graph and Hypergraph Neural Networks paper (IJCAI 2021).
- Parameters:
  - in_channels (int) – \(C_{in}\) is the number of input channels.
  - hid_channels (int) – \(C_{hid}\) is the number of hidden channels.
  - num_classes (int) – The number of classes in the classification task.
  - eps (float) – The epsilon value. Defaults to 0.0.
  - train_eps (bool) – If set to True, the epsilon value will be trainable. Defaults to False.
  - use_bn (bool) – If set to True, use batch normalization. Defaults to False.
  - drop_rate (float, optional) – Dropout ratio. Defaults to 0.5.
- forward(X: Tensor, hg: eg.Hypergraph) → Tensor [source]
The forward function.
- Parameters:
  - X (torch.Tensor) – Input vertex feature matrix. Size \((N, C_{in})\).
  - hg (eg.Hypergraph) – The hypergraph structure that contains \(N\) vertices.
- training: bool
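A minimal usage sketch with a trainable epsilon, under the same eg.Hypergraph(num_v, e_list) assumption:

>>> import torch
>>> import easygraph as eg
>>> from easygraph.model.hypergraphs.unignn import UniGIN
>>> hg = eg.Hypergraph(5, [[0, 1, 2], [1, 3, 4]])
>>> X = torch.rand(5, 8)
>>> # train_eps=True makes the GIN-style epsilon a learnable parameter
>>> model = UniGIN(in_channels=8, hid_channels=16, num_classes=3, eps=0.0, train_eps=True)
>>> out = model(X, hg)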
- class easygraph.model.hypergraphs.unignn.UniSAGE(in_channels: int, hid_channels: int, num_classes: int, use_bn: bool = False, drop_rate: float = 0.5)[source]
Bases: Module
The UniSAGE model proposed in UniGNN: a Unified Framework for Graph and Hypergraph Neural Networks paper (IJCAI 2021).
- Parameters:
  - in_channels (int) – \(C_{in}\) is the number of input channels.
  - hid_channels (int) – \(C_{hid}\) is the number of hidden channels.
  - num_classes (int) – The number of classes in the classification task.
  - use_bn (bool) – If set to True, use batch normalization. Defaults to False.
  - drop_rate (float, optional) – Dropout ratio. Defaults to 0.5.
- forward(X: Tensor, hg: eg.Hypergraph) → Tensor [source]
The forward function.
- Parameters:
  - X (torch.Tensor) – Input vertex feature matrix. Size \((N, C_{in})\).
  - hg (eg.Hypergraph) – The hypergraph structure that contains \(N\) vertices.
- training: bool
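A minimal single training step, under the same eg.Hypergraph(num_v, e_list) assumption; the labels and sizes are toy values:

>>> import torch
>>> import torch.nn.functional as F
>>> import easygraph as eg
>>> from easygraph.model.hypergraphs.unignn import UniSAGE
>>> hg = eg.Hypergraph(6, [[0, 1, 2], [2, 3], [3, 4, 5]])
>>> X = torch.rand(6, 8)
>>> y = torch.tensor([0, 0, 1, 1, 2, 2])  # toy vertex labels
>>> model = UniSAGE(in_channels=8, hid_channels=16, num_classes=3)
>>> optimizer = torch.optim.Adam(model.parameters(), lr=0.01)
>>> loss = F.cross_entropy(model(X, hg), y)  # cross-entropy over vertex logits
>>> loss.backward()
>>> optimizer.step()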