grelu.model.heads#

Model head layers to return the final prediction outputs.

Classes#

ConvHead

A 1x1 Conv layer that transforms the number of channels in the input and then optionally pools along the length axis.

MLPHead

This block implements the multi-layer perceptron (MLP) module.

Module Contents#

class grelu.model.heads.ConvHead(n_tasks: int, in_channels: int, act_func: str | None = None, pool_func: str | None = None, norm: bool = False)[source]#

Bases: torch.nn.Module

A 1x1 Conv layer that transforms the number of channels in the input and then optionally pools along the length axis.

Parameters:
  • n_tasks – Number of tasks (output channels)

  • in_channels – Number of channels in the input

  • norm – If True, batch normalization will be included.

  • act_func – Activation function for the convolutional layer

  • pool_func – Pooling function.

forward(x: torch.Tensor) → torch.Tensor[source]#
Parameters:

x – Input data.
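As a rough illustration of what `ConvHead` computes, the sketch below reproduces the two stages described above with plain `torch.nn` layers: a 1x1 convolution that maps `in_channels` to `n_tasks` output channels, followed by optional pooling along the length axis. This is a hand-rolled approximation for shape intuition, not the actual grelu implementation (which also supports the `norm` and `act_func` options).

```python
import torch
from torch import nn

# Sketch of ConvHead with n_tasks=4, in_channels=16, and average
# pooling. Not the grelu class itself; layer names are illustrative.
conv = nn.Conv1d(in_channels=16, out_channels=4, kernel_size=1)

x = torch.randn(2, 16, 100)   # input: (batch, channels, length)
y = conv(x)                   # (2, 4, 100): channels mapped to tasks
pooled = y.mean(dim=-1)       # pool over the length axis -> (2, 4)

print(y.shape, pooled.shape)
```

With `pool_func=None`, the output keeps its length axis; with a pooling function, the length axis is reduced and each task gets a single value per example.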

class grelu.model.heads.MLPHead(n_tasks: int, in_channels: int, in_len: int, act_func: str | None = None, hidden_size: List[int] = [], norm: bool = False, dropout: float = 0.0)[source]#

Bases: torch.nn.Module

This block implements the multi-layer perceptron (MLP) module.

Parameters:
  • n_tasks – Number of tasks (output channels)

  • in_channels – Number of channels in the input

  • in_len – Length of the input

  • norm – If True, batch normalization will be included.

  • act_func – Activation function for the linear layers

  • hidden_size – A list of dimensions for each hidden layer of the MLP.

  • dropout – Dropout probability for the linear layers.

forward(x: torch.Tensor) → torch.Tensor[source]#
Parameters:

x – Input data.
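To make the `MLPHead` parameters concrete, the sketch below builds an analogous MLP with plain `torch.nn` layers: the `(channels, length)` input is flattened, passed through one hidden layer (corresponding to `hidden_size=[64]`) with an activation and dropout, and projected to `n_tasks` outputs. The layer ordering is an assumption for illustration; it is not the actual grelu implementation.

```python
import torch
from torch import nn

# Sketch of MLPHead with n_tasks=3, in_channels=8, in_len=50,
# hidden_size=[64], act_func="relu", dropout=0.1. Illustrative only.
in_channels, in_len, n_tasks = 8, 50, 3
mlp = nn.Sequential(
    nn.Flatten(),                         # (batch, channels * length)
    nn.Linear(in_channels * in_len, 64),  # first (and only) hidden layer
    nn.ReLU(),
    nn.Dropout(0.1),
    nn.Linear(64, n_tasks),               # final projection to tasks
)

x = torch.randn(2, in_channels, in_len)   # (batch, channels, length)
out = mlp(x)                              # (2, 3): one value per task

print(out.shape)
```

Because the input is flattened, `in_len` must be fixed at construction time; this is why `MLPHead` takes `in_len` as a parameter while `ConvHead` does not.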