grelu.lightning.metrics#

Metrics to measure the performance of a predictive sequence model. Each metric produces either one value per task or a single value averaged across tasks.

Classes#

BestF1

Metric class to calculate the best F1 score for each task.

MSE

Metric class to calculate the MSE for each task.

PearsonCorrCoef

Metric class to calculate the Pearson correlation coefficient for each task.

Module Contents#

class grelu.lightning.metrics.BestF1(num_labels: int = 1, average: bool = True)[source]#

Bases: torchmetrics.Metric

Metric class to calculate the best F1 score for each task.

Parameters:
  • num_labels – Number of tasks

  • average – If True, return the metric averaged across tasks. Otherwise, return a separate value for each task.

As input to forward and update the metric accepts the following input:

preds: Probabilities of shape (N, n_tasks, L)
target: Ground truth labels of shape (N, n_tasks, L)

As output of forward and compute the metric returns the following output:

output: A tensor containing the best F1 score per task, or the average across tasks.

average[source]#
update(preds: torch.Tensor, target: torch.Tensor) → None[source]#
compute() → torch.Tensor[source]#
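
A minimal usage sketch for BestF1 (assuming grelu and torch are installed; the tensor shapes and values below are illustrative, following the (N, n_tasks, L) convention documented above):

```python
import torch

from grelu.lightning.metrics import BestF1

# Illustrative shapes: batch of 8 sequences, 2 tasks, output length 100
preds = torch.rand(8, 2, 100)              # predicted probabilities in [0, 1]
target = torch.randint(0, 2, (8, 2, 100))  # binary ground-truth labels

# average=False returns one best F1 score per task
metric = BestF1(num_labels=2, average=False)
metric.update(preds, target)
print(metric.compute())  # tensor of shape (2,): best F1 per task
```

Because these classes subclass torchmetrics.Metric, calling the metric object directly (metric(preds, target)) invokes forward, which updates the internal state and returns the value for that batch.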
class grelu.lightning.metrics.MSE(num_outputs: int = 1, average: bool = True)[source]#

Bases: torchmetrics.Metric

Metric class to calculate the MSE for each task.

Parameters:
  • num_outputs – Number of tasks

  • average – If True, return the metric averaged across tasks. Otherwise, return a separate value for each task.

As input to forward and update the metric accepts the following input:

preds: Predictions of shape (N, n_tasks, L)
target: Ground truth labels of shape (N, n_tasks, L)

As output of forward and compute the metric returns the following output:

output: A tensor containing the MSE per task, or the average across tasks.

average[source]#
update(preds: torch.Tensor, target: torch.Tensor) → None[source]#
compute() → torch.Tensor[source]#
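
A minimal usage sketch for MSE (same assumptions as above; the shapes are illustrative):

```python
import torch

from grelu.lightning.metrics import MSE

# Illustrative shapes: batch of 8 sequences, 2 tasks, output length 100
preds = torch.randn(8, 2, 100)
target = torch.randn(8, 2, 100)

# average=True (the default) returns a single value averaged across tasks
metric = MSE(num_outputs=2, average=True)
metric.update(preds, target)
print(metric.compute())  # scalar tensor: MSE averaged over both tasks
```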
class grelu.lightning.metrics.PearsonCorrCoef(num_outputs: int = 1, average: bool = True)[source]#

Bases: torchmetrics.Metric

Metric class to calculate the Pearson correlation coefficient for each task.

Parameters:
  • num_outputs – Number of tasks

  • average – If True, return the metric averaged across tasks. Otherwise, return a separate value for each task.

As input to forward and update the metric accepts the following input:

preds: Predictions of shape (N, n_tasks, L)
target: Ground truth labels of shape (N, n_tasks, L)

As output of forward and compute the metric returns the following output:

output: A tensor containing the Pearson correlation coefficient per task, or the average across tasks.

pearson[source]#
average[source]#
update(preds: torch.Tensor, target: torch.Tensor) → None[source]#
compute() → torch.Tensor[source]#
reset() → None[source]#
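
A minimal usage sketch for PearsonCorrCoef, including the documented reset() method (same assumptions as above; the shapes are illustrative):

```python
import torch

from grelu.lightning.metrics import PearsonCorrCoef

# Illustrative shapes: batch of 8 sequences, 2 tasks, output length 100
preds = torch.randn(8, 2, 100)
target = torch.randn(8, 2, 100)

metric = PearsonCorrCoef(num_outputs=2, average=False)
metric.update(preds, target)
print(metric.compute())  # tensor of shape (2,): Pearson r per task

metric.reset()  # clear accumulated state before the next evaluation run
```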