decima.model package¶
Submodules¶
decima.model.decima_model module¶
decima.model.lightning module¶
The LightningModel class.
- class decima.model.lightning.LightningModel(model_params, train_params={}, data_params={})[source]¶
Bases: LightningModule
Wrapper for predictive sequence models
- Parameters:
model_params (dict) – Dictionary of parameters specifying the model architecture.
train_params (dict) – Dictionary of parameters controlling training.
data_params (dict) – Dictionary of parameters describing the training data, including task metadata.
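Example (illustrative only; the dictionary keys shown below are assumptions rather than a documented schema):

# All keys here are hypothetical placeholders; consult the decima source
# for the schema each dictionary actually expects.
from decima.model.lightning import LightningModel

model = LightningModel(
    model_params={"n_tasks": 2},                            # assumed key
    train_params={"lr": 1e-4, "batch_size": 32},            # assumed keys
    data_params={"tasks": {"name": ["task_a", "task_b"]}},  # assumed key, used by get_task_idxs
)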
- get_task_idxs(tasks, key='name', invert=False)[source]¶
Given a task name or metadata entry, get the task index. If integers are provided, they are returned unchanged.
- Parameters:
tasks (Union[int, str, List[int], List[str]]) – A string corresponding to a task name or metadata entry, or an integer indicating the index of a task, or a list of strings/integers.
key (str) – Key to model.data_params["tasks"] in which the relevant task data is stored. "name" will be used by default.
invert (bool) – Get indices for all tasks except those listed in tasks.
- Return type:
Union[int, List[int]]
- Returns:
The index or indices of the corresponding task(s) in the model's output.
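Example (the task names below are hypothetical; real values come from model.data_params["tasks"]):

idx = model.get_task_idxs("CD4 T cell")                    # name -> index
idxs = model.get_task_idxs(["CD4 T cell", "B cell"])       # names -> indices
others = model.get_task_idxs(["CD4 T cell"], invert=True)  # every other task
same = model.get_task_idxs(3)                              # integers pass through unchanged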
- make_predict_loader(dataset, batch_size=None, num_workers=None, **kwargs)[source]¶
Make dataloader for prediction
- Return type:
DataLoader
- make_test_loader(dataset, batch_size=None, num_workers=None)[source]¶
Make dataloader for validation and testing
- Return type:
DataLoader
- make_train_loader(dataset, batch_size=None, num_workers=None)[source]¶
Make dataloader for training
- Return type:
DataLoader
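Example (a minimal sketch, assuming train_dataset and val_dataset are Dataset objects the model accepts):

# Assumed variables: `train_dataset` and `val_dataset` yield model-ready examples.
train_loader = model.make_train_loader(train_dataset, batch_size=32, num_workers=4)
val_loader = model.make_test_loader(val_dataset, batch_size=32)
pred_loader = model.make_predict_loader(val_dataset, batch_size=64)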
- on_save_checkpoint(checkpoint)[source]¶
Called by Lightning when saving a checkpoint to give you a chance to store anything else you might want to save.
- Parameters:
checkpoint (dict) – The full checkpoint dictionary before it gets dumped to a file. Implementations of this hook can insert additional data into this dictionary.
- Return type:
None
Example:

def on_save_checkpoint(self, checkpoint):
    # 99% of use cases you don't need to implement this method
    checkpoint['something_cool_i_want_to_save'] = my_cool_pickable_object
Note
Lightning saves all aspects of training (epoch, global step, etc…) including amp scaling. There is no need for you to store anything about training.
- predict_on_dataset(dataset, devices=None, num_workers=1, batch_size=6, augment_aggfunc='mean', compare_func=None)[source]¶
Predict for a dataset of sequences or variants
- Parameters:
dataset – Dataset object yielding sequences or variants to predict on.
devices – Device(s) on which to run prediction.
num_workers (int) – Number of dataloader workers.
batch_size (int) – Batch size for the dataloader.
augment_aggfunc (str) – Function used to aggregate predictions across augmented versions of each sequence.
compare_func – Optional function used to compare predictions between sequences (e.g. variant alleles).
- Returns:
Model predictions as a numpy array or dataframe
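Example (a hedged sketch; the dataset variable and device list are assumptions):

# Assumes `dataset` yields sequences or variants in the expected format.
preds = model.predict_on_dataset(
    dataset,
    devices=[0],           # assumed to be GPU indices
    batch_size=6,
    num_workers=1,
    augment_aggfunc="mean",
)
# `preds` is a numpy array or dataframe, per the Returns section above.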
- predict_step(batch, batch_idx, dataloader_idx=0)[source]¶
Predict for a single batch of sequences or variants
- train_on_dataset(train_dataset, val_dataset, checkpoint_path=None)[source]¶
Train model and optionally log metrics to wandb.
- Parameters:
train_dataset (Dataset) – Dataset object that yields training examples
val_dataset (Dataset) – Dataset object that yields validation examples
checkpoint_path (str) – Path to model checkpoint from which to resume training. The optimizer will be set to its checkpointed state.
- Returns:
PyTorch Lightning Trainer
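Example (a minimal sketch; the checkpoint path is a placeholder):

trainer = model.train_on_dataset(train_dataset, val_dataset)

# Resuming from a checkpoint also restores the optimizer state:
trainer = model.train_on_dataset(
    train_dataset,
    val_dataset,
    checkpoint_path="checkpoints/last.ckpt",  # placeholder path
)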
- training_step(batch, batch_idx)[source]¶
Compute and return the training loss and some additional metrics, e.g. for the progress bar or logger.
- Parameters:
batch (Tensor) – The output of your data iterable, normally a DataLoader.
batch_idx (int) – The index of this batch.
dataloader_idx – The index of the dataloader that produced this batch (only if multiple dataloaders used).
- Return type:
Tensor
- Returns:
Tensor – The loss tensor.
dict – A dictionary which can include any keys, but must include the key 'loss' in the case of automatic optimization.
None – In automatic optimization, this will skip to the next batch (but is not supported for multi-GPU, TPU, or DeepSpeed). For manual optimization, this has no special meaning, as returning the loss is not required.
In this step you’d normally do the forward pass and calculate the loss for a batch. You can also do fancier things like multiple forward passes or something model specific.
Example:

def training_step(self, batch, batch_idx):
    x, y, z = batch
    out = self.encoder(x)
    loss = self.loss(out, x)
    return loss
To use multiple optimizers, you can switch to 'manual optimization' and control their stepping:

def __init__(self):
    super().__init__()
    self.automatic_optimization = False

# Multiple optimizers (e.g.: GANs)
def training_step(self, batch, batch_idx):
    opt1, opt2 = self.optimizers()

    # do training_step with encoder
    ...
    opt1.step()

    # do training_step with decoder
    ...
    opt2.step()
Note
When accumulate_grad_batches > 1, the loss returned here will be automatically normalized by accumulate_grad_batches internally.
- validation_step(batch, batch_idx)[source]¶
Operates on a single batch of data from the validation set. In this step you might generate examples or calculate anything of interest like accuracy.
- Parameters:
batch (Tensor) – The output of your data iterable, normally a DataLoader.
batch_idx (int) – The index of this batch.
dataloader_idx – The index of the dataloader that produced this batch (only if multiple dataloaders used).
- Return type:
Tensor
- Returns:
Tensor – The loss tensor.
dict – A dictionary. Can include any keys, but must include the key 'loss'.
None – Skip to the next batch.
# if you have one val dataloader:
def validation_step(self, batch, batch_idx): ...

# if you have multiple val dataloaders:
def validation_step(self, batch, batch_idx, dataloader_idx=0): ...
Examples:

# CASE 1: A single validation dataset
def validation_step(self, batch, batch_idx):
    x, y = batch

    # implement your own
    out = self(x)
    loss = self.loss(out, y)

    # log 6 example images
    # or generated text... or whatever
    sample_imgs = x[:6]
    grid = torchvision.utils.make_grid(sample_imgs)
    self.logger.experiment.add_image('example_images', grid, 0)

    # calculate acc
    labels_hat = torch.argmax(out, dim=1)
    val_acc = torch.sum(y == labels_hat).item() / (len(y) * 1.0)

    # log the outputs!
    self.log_dict({'val_loss': loss, 'val_acc': val_acc})
If you pass in multiple val dataloaders, validation_step() will have an additional argument. We recommend setting the default value of 0 so that you can quickly switch between single and multiple dataloaders.

# CASE 2: multiple validation dataloaders
def validation_step(self, batch, batch_idx, dataloader_idx=0):
    # dataloader_idx tells you which dataset this is.
    ...
Note
If you don’t need to validate you don’t need to implement this method.
Note
When the validation_step() is called, the model has been put in eval mode and PyTorch gradients have been disabled. At the end of validation, the model goes back to training mode and gradients are enabled.
decima.model.loss module¶
- class decima.model.loss.TaskWisePoissonMultinomialLoss(total_weight=1, eps=1e-07, debug=False)[source]¶
Bases: Module
- __init__(total_weight=1, eps=1e-07, debug=False)[source]¶
Initialize internal Module state, shared by both nn.Module and ScriptModule.
- forward(input, target)[source]¶
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
- Return type:
Tensor
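Example (a hedged usage sketch; the tensor shapes below are assumptions and may not match the shapes the loss actually expects):

import torch
from decima.model.loss import TaskWisePoissonMultinomialLoss

loss_fn = TaskWisePoissonMultinomialLoss(total_weight=1, eps=1e-07)

# Assumed shape (batch, tasks, length); predictions kept strictly positive.
preds = torch.rand(2, 4, 128) + 0.1
target = torch.poisson(torch.ones(2, 4, 128))
loss = loss_fn(preds, target)  # call the Module, not .forward(), to run hooks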
decima.model.metrics module¶
- class decima.model.metrics.DiseaseLfcMSE(pairs, average=True)[source]¶
Bases: Metric
- __init__(pairs, average=True)[source]¶
Initialize internal Module state, shared by both nn.Module and ScriptModule.
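Example (a hedged construction sketch; the structure of pairs is an assumption, presumed to hold pairs of task indices whose log fold-change errors are measured):

import torch
from decima.model.metrics import DiseaseLfcMSE

# Assumption: each row pairs two task indices (e.g. disease vs. healthy).
pairs = torch.tensor([[0, 1], [2, 3]])
metric = DiseaseLfcMSE(pairs=pairs, average=True)

As a torchmetrics Metric, it would then be updated with predictions and targets and read out with compute(); the exact update signature is not documented here.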
- class decima.model.metrics.WarningCounter(warning_types=None, **kwargs)[source]¶
Bases: Metric
A TorchMetric to count occurrences of different warning types, including a dedicated category for ‘unknown’ warnings.
- __init__(warning_types=None, **kwargs)[source]¶
Initialize internal Module state, shared by both nn.Module and ScriptModule.
- update(warnings)[source]¶
Update the internal state with new warnings.
- Parameters:
warnings (List[WarningType]) – A list of warning strings from a batch.
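Example (a hedged usage sketch; the warning category names are hypothetical):

from decima.model.metrics import WarningCounter

# Hypothetical warning categories; real ones depend on the data pipeline.
counter = WarningCounter(warning_types=["low_counts", "short_sequence"])
counter.update(["low_counts", "low_counts", "unrecognized_warning"])
counts = counter.compute()  # per-category counts, plus the 'unknown' bucket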