grelu.lightning.metrics#

grelu.lightning.metrics contains custom metrics to measure the performance of sequence-to-function models. These metrics are used in grelu.lightning.

All metrics inherit from the torchmetrics.Metric class and define __init__, update, and compute methods. Each metric produces one output value per task; these values can optionally be averaged across tasks by setting average=True.
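The shared update/compute accumulation pattern can be illustrated in plain PyTorch. The class below is a simplified mock, not the gReLU implementation: it shows how state accumulated over batches in update is reduced per task in compute, and how the average flag works, using MSE as the example quantity.

```python
import torch

class StreamingMSE:
    """Illustrative sketch of the torchmetrics-style update/compute
    pattern (NOT the gReLU implementation)."""

    def __init__(self, num_outputs: int = 1, average: bool = True):
        self.average = average
        self.sum_sq = torch.zeros(num_outputs)  # running squared error per task
        self.count = 0                          # elements seen per task

    def update(self, preds: torch.Tensor, target: torch.Tensor) -> None:
        # preds, target: (N, n_tasks, L); accumulate over batch and length dims
        self.sum_sq += ((preds - target) ** 2).sum(dim=(0, 2))
        self.count += preds.shape[0] * preds.shape[2]

    def compute(self) -> torch.Tensor:
        per_task = self.sum_sq / self.count  # one value per task
        return per_task.mean() if self.average else per_task
```

Calling update on several batches and then compute yields the metric over all data seen so far, matching the streaming behavior of torchmetrics.Metric subclasses.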

Classes#

BestF1

Metric class to calculate the best F1 score for each task.

MSE

Metric class to calculate the MSE for each task.

PearsonCorrCoef

Metric class to calculate the Pearson correlation coefficient for each task.

Module Contents#

class grelu.lightning.metrics.BestF1(num_labels: int = 1, average: bool = True)[source]#

Bases: torchmetrics.Metric

Metric class to calculate the best F1 score for each task.

Parameters:
  • num_labels – Number of tasks

  • average – If True, return the average metric across tasks. Otherwise, return a separate value for each task.

As input to forward and update the metric accepts the following input:

preds: Probabilities of shape (N, n_tasks, L)

target: Ground truth labels of shape (N, n_tasks, L)

As output of forward and compute the metric returns the following output:

output: A tensor containing the best F1 score for each task (or the average across tasks if average=True)

average = True[source]#
update(preds: torch.Tensor, target: torch.Tensor) None[source]#
compute() torch.Tensor[source]#
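A best-F1 metric is typically computed by sweeping a decision threshold over the predicted probabilities and keeping the highest F1 per task. The function below is a hedged sketch of that idea in plain PyTorch; the threshold grid and the flattening of the batch and length dimensions are assumptions for illustration, not details of the gReLU class.

```python
import torch

def best_f1_sketch(probs: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    """Best F1 per task via a threshold sweep (illustrative only).
    probs, labels: (N, n_tasks, L) with labels in {0, 1}."""
    n_tasks = probs.shape[1]
    # Flatten batch and length dims so each row holds one task's values
    p = probs.transpose(0, 1).reshape(n_tasks, -1)
    y = labels.transpose(0, 1).reshape(n_tasks, -1).float()
    best = torch.zeros(n_tasks)
    for t in torch.linspace(0.05, 0.95, 19):  # assumed threshold grid
        pred = (p >= t).float()
        tp = (pred * y).sum(dim=1)
        precision = tp / pred.sum(dim=1).clamp(min=1e-8)
        recall = tp / y.sum(dim=1).clamp(min=1e-8)
        f1 = 2 * precision * recall / (precision + recall).clamp(min=1e-8)
        best = torch.maximum(best, f1)
    return best  # (n_tasks,); taking .mean() corresponds to average=True
```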
class grelu.lightning.metrics.MSE(num_outputs: int = 1, average: bool = True)[source]#

Bases: torchmetrics.Metric

Metric class to calculate the MSE for each task.

Parameters:
  • num_outputs – Number of tasks

  • average – If True, return the average metric across tasks. Otherwise, return a separate value for each task.

As input to forward and update the metric accepts the following input:

preds: Predictions of shape (N, n_tasks, L)

target: Ground truth labels of shape (N, n_tasks, L)

As output of forward and compute the metric returns the following output:

output: A tensor containing the MSE for each task (or the average across tasks if average=True)

average = True[source]#
update(preds: torch.Tensor, target: torch.Tensor) None[source]#
compute() torch.Tensor[source]#
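For a single batch, the per-task quantity this metric reports can be written in closed form. The sketch below assumes the squared error is averaged over the batch (N) and length (L) dimensions for each task:

```python
import torch

def per_task_mse(preds: torch.Tensor, target: torch.Tensor,
                 average: bool = True) -> torch.Tensor:
    """Closed-form per-task MSE for one batch (illustrative sketch)."""
    mse = ((preds - target) ** 2).mean(dim=(0, 2))  # shape (n_tasks,)
    return mse.mean() if average else mse
```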
class grelu.lightning.metrics.PearsonCorrCoef(num_outputs: int = 1, average: bool = True)[source]#

Bases: torchmetrics.Metric

Metric class to calculate the Pearson correlation coefficient for each task.

Parameters:
  • num_outputs – Number of tasks

  • average – If True, return the average metric across tasks. Otherwise, return a separate value for each task.

As input to forward and update the metric accepts the following input:

preds: Predictions of shape (N, n_tasks, L)

target: Ground truth labels of shape (N, n_tasks, L)

As output of forward and compute the metric returns the following output:

output: A tensor containing the Pearson correlation coefficient for each task (or the average across tasks if average=True).

pearson[source]#
average = True[source]#
update(preds: torch.Tensor, target: torch.Tensor) None[source]#
compute() torch.Tensor[source]#
reset() None[source]#
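The per-task Pearson correlation can likewise be sketched directly; flattening the batch and length dimensions per task is an assumption about how the class pools its inputs, made here for illustration. (The reset method listed above clears accumulated state between evaluation runs, as in any torchmetrics.Metric.)

```python
import torch

def per_task_pearson(preds: torch.Tensor, target: torch.Tensor,
                     average: bool = True) -> torch.Tensor:
    """Pearson correlation per task (illustrative sketch).
    preds, target: (N, n_tasks, L)."""
    n_tasks = preds.shape[1]
    # One row per task, pooling batch and length dims
    p = preds.transpose(0, 1).reshape(n_tasks, -1)
    t = target.transpose(0, 1).reshape(n_tasks, -1)
    # Center each task's values, then take the normalized dot product
    p = p - p.mean(dim=1, keepdim=True)
    t = t - t.mean(dim=1, keepdim=True)
    r = (p * t).sum(dim=1) / (p.norm(dim=1) * t.norm(dim=1)).clamp(min=1e-8)
    return r.mean() if average else r
```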