reglm package

Submodules

reglm.dataset module

class reglm.dataset.CharDataset(seqs, labels, seq_len=None)[source]

Bases: torch.utils.data.dataset.Dataset

decode(idxs, is_labeled=False)[source]

Given a torch tensor of tokens, return the decoded sequence as a string.

Parameters
  • idxs (list, torch.LongTensor) – list or 1-D tensor

  • is_labeled (bool) – Whether labels are included

Returns

labeled sequence as a string

encode_label(label)[source]

Encode a label as a torch tensor of tokens

Parameters

label (str) – label token sequence

Returns

torch.LongTensor of shape (label_len,)

encode_seq(seq)[source]

Encode a sequence as a torch tensor of tokens

Parameters

seq (str) – DNA sequence

Returns

torch.LongTensor of shape (seq_len,)
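
A minimal usage sketch. The sequences and two-character label strings below are hypothetical placeholders; the calls follow the signatures documented above.

from reglm.dataset import CharDataset

# Hypothetical toy data: DNA sequences paired with label token strings
seqs = ["ACGTACGTAC", "TTGACGGATC"]
labels = ["03", "40"]

ds = CharDataset(seqs, labels, seq_len=10)

seq_tokens = ds.encode_seq(seqs[0])        # torch.LongTensor of shape (seq_len,)
label_tokens = ds.encode_label(labels[0])  # torch.LongTensor of shape (label_len,)

# Decode token indices back into a string
print(ds.decode(seq_tokens, is_labeled=False))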

reglm.evolve module

reglm.evolve.evolve(start_seqs, regression_model, seq_len=None, language_model=None, label=None, tol=0.0, specific=None, max_iter=10, device=0, num_workers=1, batch_size=512)[source]

Perform directed evolution, optionally using a language model to filter candidate sequences.

Parameters
  • start_seqs (list) – Starting sequences

  • regression_model (pl.LightningModule) – Regression model

  • seq_len (int) – Sequence length for regression model

  • language_model (pl.LightningModule) – Language model

  • label (str) – Label for language model

  • tol (float) – Tolerance for likelihood filter

  • specific (list) – Task indices if optimizing for task specificity

  • max_iter (int) – Maximum number of iterations for evolution

  • device (int) – GPU index

  • num_workers (int) – Number of workers for regression model

  • batch_size (int) – Batch size for regression model

Returns

Dataframe containing evolution results

Return type

df (pd.DataFrame)
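
A sketch of a typical call, assuming a trained regression model (and optionally a trained regLM language model) is already in memory; the starting sequences and label string are placeholders.

from reglm.evolve import evolve

start_seqs = ["ACGT" * 20, "TTGA" * 20]  # placeholder starting sequences

df = evolve(
    start_seqs,
    regression_model,               # trained pl.LightningModule (assumed available)
    seq_len=80,
    language_model=language_model,  # optional regLM model used as a likelihood filter
    label="44",                     # label token string for the language model
    tol=0.0,
    max_iter=10,
    device=0,
    batch_size=512,
)
print(df.head())  # dataframe of evolution results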

reglm.interpret module

reglm.interpret.ISM(seq, drop_ref=True)[source]

Perform in-silico mutagenesis of a DNA sequence.

Parameters
  • seq (str) – DNA sequence

  • drop_ref (bool) – If True, the original (reference) base is excluded from the returned mutations, yielding 3 variants per position instead of 4.

Returns

List of mutated DNA sequences, of length 3*len(seq) or 4*len(seq)

reglm.interpret.ISM_at_pos(seq, pos, drop_ref=True)[source]

Perform in-silico mutagenesis at a single position in the sequence.

Parameters
  • seq (str) – DNA sequence

  • pos (int) – Position to mutate

  • drop_ref (bool) – If True, the original (reference) base is excluded from the returned mutations, yielding 3 variants instead of 4.

Returns

List of mutated DNA sequences, of length 3 or 4
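
A short sketch of both mutagenesis helpers on a toy sequence; the assertions simply restate the documented output lengths.

from reglm.interpret import ISM, ISM_at_pos

seq = "ACGTAC"

# Mutate every position. With drop_ref=True the reference base is excluded,
# so each position yields 3 variants.
all_mutants = ISM(seq, drop_ref=True)
assert len(all_mutants) == 3 * len(seq)

# Mutate a single position. With drop_ref=False the reference base is kept,
# so 4 sequences are returned.
pos_mutants = ISM_at_pos(seq, pos=2, drop_ref=False)
assert len(pos_mutants) == 4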

reglm.interpret.ISM_predict(seqs, model, seq_len=None, batch_size=512, device=0, num_workers=8)[source]

Perform in-silico mutagenesis of DNA sequences and make predictions with a regression model to get per-base importance scores

Parameters
  • seqs (list) – List of DNA sequences of equal length

  • model (pl.LightningModule) – regression model

  • seq_len (int) – Maximum sequence length for regression model

  • batch_size (int) – Batch size for prediction

  • num_workers (int) – Number of workers for prediction

  • device (int) – GPU index for prediction

Returns

Array of shape (number of sequences x length of sequences x 4)

Return type

preds (np.array)

reglm.interpret.ISM_score(seqs, preds)[source]

Calculate a per-base importance score from ISM predictions

Parameters
  • seqs (list) – List of sequences

  • preds (np.array) – ISM predictions from seqs

Returns

Array of shape (N x seq_len), containing per-base importance scores

Return type

scores (np.array)
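
A sketch of the full ISM workflow, assuming a trained regression model is available; the input sequences are placeholders.

from reglm.interpret import ISM_predict, ISM_score

seqs = ["ACGTACGTAC", "TTGACGGATC"]  # equal-length placeholder sequences

# Predictions for every single-base mutant of every sequence
preds = ISM_predict(seqs, regression_model, seq_len=10, batch_size=512, device=0)
# preds: np.array of shape (number of sequences, sequence length, 4)

# Collapse the ISM predictions into one importance score per base
scores = ISM_score(seqs, preds)  # np.array of shape (N, seq_len)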

reglm.interpret.generate_random_sequences(n=1, seq_len=1024, seed=None)[source]

Generate random DNA sequences.

Parameters
  • n (int) – Number of sequences to generate (default 1).

  • seq_len (int) – Length of each sequence (default 1024).

  • seed (int) – Seed value for the random number generator (default None).

Returns

Generated sequences as a list of strings.

reglm.interpret.motif_insert(motif_dict, model, label, ref_label, seq_len, n=100)[source]

Insert motifs into random sequences and calculate the log-likelihood ratio of each motif under the given label vs. the reference label.

Parameters
  • motif_dict (dict) – Dictionary mapping motif IDs to consensus sequences

  • model (pl.LightningModule) – regLM model

  • label (list) – Label for the regLM model

  • ref_label (str) – Reference label for the regLM model, used as the baseline in the log-likelihood ratio

  • seq_len (int) – Length of random sequences preceding the motif

  • n (int) – Number of random sequences to insert each motif into

Returns

Dataframe containing log likelihood ratios of motif-containing sequences

Return type

(pd.DataFrame)
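
A sketch of a call, assuming a trained regLM model is in memory; the motif dictionary and labels are hypothetical placeholders.

from reglm.interpret import motif_insert

motif_dict = {"motif_1": "AGATAAGA"}  # hypothetical motif ID -> consensus sequence

df = motif_insert(
    motif_dict,
    model,            # trained regLM LightningModel (assumed available)
    label=["44"],     # label of interest
    ref_label="00",   # reference label used as the baseline
    seq_len=80,       # length of the random sequence preceding each motif
    n=100,
)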

reglm.interpret.motif_likelihood(seqs, motif, label, model)[source]

Return the log-likelihood of a motif occurring at the end of each of the given sequences.

Parameters
  • seqs (list) – Sequences

  • motif (str) – Motif sequence

  • label (list) – Label for the regLM model

  • model (pl.LightningModule) – regLM model

Returns

log-likelihoods

Return type

(list)

reglm.lightning module

class reglm.lightning.LightningModel(config=None, ckpt_dir='./checkpoints/hyenadna-medium-160k-seqlen', hyenadna_path='/code/hyena-dna', save_dir='.', lr=0.0001, label_len=None)[source]

Bases: pytorch_lightning.core.module.LightningModule

LightningModule class to train and use autoregressive token-conditioned regLM language models.

Parameters
  • config (dict) – Config dictionary containing model parameters

  • ckpt_dir (str) – Path to directory containing downloaded model checkpoints, or in which they should be downloaded

  • hyenadna_path (str) – Path to cloned hyenaDNA repository

  • save_dir (str) – Directory to save model checkpoints and logs

  • lr (float) – Learning rate

  • label_len (int) – Number of label tokens preceding each DNA sequence
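
A minimal construction sketch; the checkpoint directory and hyena-dna path below are placeholders that should point to the corresponding locations on your system.

from reglm.lightning import LightningModel

model = LightningModel(
    ckpt_dir="./checkpoints/hyenadna-medium-160k-seqlen",
    hyenadna_path="/code/hyena-dna",
    save_dir="./reglm_runs",
    lr=1e-4,
    label_len=2,  # e.g. two label tokens preceding each sequence
)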

P_labels_given_seqs(seqs, labels, per_pos=True, log=True)[source]

P_seqs(seqs, labels, per_pos=False, log=True)[source]
Parameters
  • seqs (list, str) – Sequences as strings

  • labels (list, str) – Labels as strings

  • log (bool) – Return log likelihood

  • include_end (bool) – Include the end token

Returns

np.array of shape (N)

P_seqs_given_labels(seqs, labels, per_pos=False, log=True, add_stop=True)[source]
Parameters
  • seqs (list, str) – Sequences as strings

  • labels (list, str) – Labels as strings

  • log (bool) – Return log likelihood

  • include_end (bool) – Include the end token

Returns

np.array of shape (N)
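
A likelihood-scoring sketch, assuming `model` is a trained LightningModel; the sequences and labels are placeholders.

seqs = ["ACGTACGTAC", "TTGACGGATC"]
labels = ["44", "00"]

# Log-likelihood of each sequence given its label, one value per sequence
ll = model.P_seqs_given_labels(seqs, labels, per_pos=False, log=True)

# Per-position log-likelihood of each label given its sequence
label_ll = model.P_labels_given_seqs(seqs, labels, per_pos=True, log=True)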

compute_accuracy_on_dataset(dataset, batch_size=64, num_workers=8)[source]

Perform inference on a dataset and return per-example accuracy. Note: this will include the accuracy of predicting the END token (1).

Parameters
  • dataset (CharDataset) – Inference dataset

  • batch_size (int) – Batch size for inference

  • num_workers (int) – Number of workers for inference

Returns

List of booleans indicating whether the predicted token at each position was equal to the true token.

configure_optimizers()[source]

Choose what optimizers and learning-rate schedulers to use in your optimization. Normally you’d need one. But in the case of GANs or similar you might have multiple.

Returns

Any of these 6 options.

  • Single optimizer.

  • List or Tuple of optimizers.

  • Two lists - The first list has multiple optimizers, and the second has multiple LR schedulers (or multiple lr_scheduler_config).

  • Dictionary, with an "optimizer" key, and (optionally) a "lr_scheduler" key whose value is a single LR scheduler or lr_scheduler_config.

  • Tuple of dictionaries as described above, with an optional "frequency" key.

  • None - Fit will run without any optimizer.

The lr_scheduler_config is a dictionary which contains the scheduler and its associated configuration. The default configuration is shown below.

lr_scheduler_config = {
    # REQUIRED: The scheduler instance
    "scheduler": lr_scheduler,
    # The unit of the scheduler's step size, could also be 'step'.
    # 'epoch' updates the scheduler on epoch end whereas 'step'
    # updates it after an optimizer update.
    "interval": "epoch",
    # How many epochs/steps should pass between calls to
    # `scheduler.step()`. 1 corresponds to updating the learning
    # rate after every epoch/step.
    "frequency": 1,
    # Metric to monitor for schedulers like `ReduceLROnPlateau`
    "monitor": "val_loss",
    # If set to `True`, will enforce that the value specified 'monitor'
    # is available when the scheduler is updated, thus stopping
    # training if not found. If set to `False`, it will only produce a warning
    "strict": True,
    # If using the `LearningRateMonitor` callback to monitor the
    # learning rate progress, this keyword can be used to specify
    # a custom logged name
    "name": None,
}

When there are schedulers in which the .step() method is conditioned on a value, such as the torch.optim.lr_scheduler.ReduceLROnPlateau scheduler, Lightning requires that the lr_scheduler_config contains the keyword "monitor" set to the metric name that the scheduler should be conditioned on.

# The ReduceLROnPlateau scheduler requires a monitor
def configure_optimizers(self):
    optimizer = Adam(...)
    return {
        "optimizer": optimizer,
        "lr_scheduler": {
            "scheduler": ReduceLROnPlateau(optimizer, ...),
            "monitor": "metric_to_track",
            "frequency": "indicates how often the metric is updated"
            # If "monitor" references validation metrics, then "frequency" should be set to a
            # multiple of "trainer.check_val_every_n_epoch".
        },
    }


# In the case of two optimizers, only one using the ReduceLROnPlateau scheduler
def configure_optimizers(self):
    optimizer1 = Adam(...)
    optimizer2 = SGD(...)
    scheduler1 = ReduceLROnPlateau(optimizer1, ...)
    scheduler2 = LambdaLR(optimizer2, ...)
    return (
        {
            "optimizer": optimizer1,
            "lr_scheduler": {
                "scheduler": scheduler1,
                "monitor": "metric_to_track",
            },
        },
        {"optimizer": optimizer2, "lr_scheduler": scheduler2},
    )

Metrics can be made available to monitor by simply logging them using self.log('metric_to_track', metric_val) in your LightningModule.

Note

The frequency value specified in a dict along with the optimizer key is an int corresponding to the number of sequential batches optimized with the specific optimizer. It should be given to none or to all of the optimizers. There is a difference between passing multiple optimizers in a list, and passing multiple optimizers in dictionaries with a frequency of 1:

  • In the former case, all optimizers will operate on the given batch in each optimization step.

  • In the latter, only one optimizer will operate on the given batch at every step.

This is different from the frequency value specified in the lr_scheduler_config mentioned above.

def configure_optimizers(self):
    optimizer_one = torch.optim.SGD(self.model.parameters(), lr=0.01)
    optimizer_two = torch.optim.SGD(self.model.parameters(), lr=0.01)
    return [
        {"optimizer": optimizer_one, "frequency": 5},
        {"optimizer": optimizer_two, "frequency": 10},
    ]

In this example, the first optimizer will be used for the first 5 steps, the second optimizer for the next 10 steps and that cycle will continue. If an LR scheduler is specified for an optimizer using the lr_scheduler key in the above dict, the scheduler will only be updated when its optimizer is being used.

Examples:

# most cases. no learning rate scheduler
def configure_optimizers(self):
    return Adam(self.parameters(), lr=1e-3)

# multiple optimizer case (e.g.: GAN)
def configure_optimizers(self):
    gen_opt = Adam(self.model_gen.parameters(), lr=0.01)
    dis_opt = Adam(self.model_dis.parameters(), lr=0.02)
    return gen_opt, dis_opt

# example with learning rate schedulers
def configure_optimizers(self):
    gen_opt = Adam(self.model_gen.parameters(), lr=0.01)
    dis_opt = Adam(self.model_dis.parameters(), lr=0.02)
    dis_sch = CosineAnnealing(dis_opt, T_max=10)
    return [gen_opt, dis_opt], [dis_sch]

# example with step-based learning rate schedulers
# each optimizer has its own scheduler
def configure_optimizers(self):
    gen_opt = Adam(self.model_gen.parameters(), lr=0.01)
    dis_opt = Adam(self.model_dis.parameters(), lr=0.02)
    gen_sch = {
        'scheduler': ExponentialLR(gen_opt, 0.99),
        'interval': 'step'  # called after each training step
    }
    dis_sch = CosineAnnealing(dis_opt, T_max=10) # called every epoch
    return [gen_opt, dis_opt], [gen_sch, dis_sch]

# example with optimizer frequencies
# see training procedure in `Improved Training of Wasserstein GANs`, Algorithm 1
# https://arxiv.org/abs/1704.00028
def configure_optimizers(self):
    gen_opt = Adam(self.model_gen.parameters(), lr=0.01)
    dis_opt = Adam(self.model_dis.parameters(), lr=0.02)
    n_critic = 5
    return (
        {'optimizer': dis_opt, 'frequency': n_critic},
        {'optimizer': gen_opt, 'frequency': 1}
    )

Note

Some things to know:

  • Lightning calls .backward() and .step() on each optimizer as needed.

  • If a learning rate scheduler is specified in configure_optimizers() with key "interval" (default “epoch”) in the scheduler configuration, Lightning will call the scheduler’s .step() method automatically in case of automatic optimization.

  • If you use 16-bit precision (precision=16), Lightning will automatically handle the optimizers.

  • If you use multiple optimizers, training_step() will have an additional optimizer_idx parameter.

  • If you use torch.optim.LBFGS, Lightning handles the closure function automatically for you.

  • If you use multiple optimizers, gradients will be calculated only for the parameters of the current optimizer at each training step.

  • If you need to control how often those optimizers step or override the default .step() schedule, override the optimizer_step() hook.

decode(idxs)[source]

Decodes indices into DNA sequences

Parameters

idxs (torch.LongTensor) – tensor or array of shape (N, L)

Returns

list of strings

Return type

seqs (list)

encode(seqs, labels, add_start=False, add_stop=False)[source]

Encode sequences and labels as indices for model inference.

Parameters
  • seqs (list, str) – Strings of base tokens

  • labels (list, str) – Strings of label tokens

  • add_start (bool) – Whether to add the start token (0)

  • add_stop (bool) – Add an end token (1) after the sequence

Returns

tensor of shape (N, L)

Return type

idxs (torch.LongTensor)

encode_labels(labels, add_start=False)[source]

Encode labels as a list of indices for inference

Parameters
  • labels (list, str) – Strings of label tokens

  • add_start (bool) – Whether to add the start token (0)

Returns

tensor of shape (N, L)

Return type

idxs (torch.LongTensor)

encode_seqs(seqs, add_stop=False)[source]

Encode sequences as lists of indices for inference

Parameters
  • seqs (list, str) – DNA sequence(s) as string or list of strings

  • add_stop (bool) – Whether to add an end token (1) after each sequence

Returns

tensor of shape (N, L) if add_stop is False or (N, L+1) if add_stop is True.

Return type

idxs (torch.LongTensor)

filter_base_probs(probs, normalize=True)[source]

Return probabilities for valid bases only

Parameters
  • probs (torch.tensor, dtype torch.float32) – tensor of shape (N, 16)

  • normalize (bool) – Whether to re-normalize the probabilities at each position to sum to 1.

Returns

tensor of shape (N, 4)

Return type

filtered_probs (torch.FloatTensor)

forward(x, drop_label=True, return_logits=False)[source]
Parameters
  • x (torch.tensor, dtype torch.float32) – tensor of shape (N, L)

  • drop_label (bool) – Whether to drop the predictions for the positions corresponding to label tokens

  • return_logits (bool) – If True, return logits. Otherwise, return probabilities.

Returns

tensor of shape (N, 16, L - label_len) if drop_label is True, or (N, 16, L) if drop_label is False. Note that the prediction for the END token (1) as well as the hypothetical position after it will be included.

Return type

logits (torch.tensor, dtype torch.float32)

generate(labels, max_new_tokens=None, temperature=1.0, top_k=None, top_p=None, normalize_filtered=True, seed=None)[source]
Parameters
  • labels (str, list) – Strings of label tokens

  • max_new_tokens (int) – Maximum number of tokens to add

  • temperature (float) – Sampling temperature

  • top_k (int) – Select the top k bases at each position. Set probabilities of other bases to 0.

  • top_p (float) – Select the top bases at each position until their cumulative probability reaches this value. Set probabilities of other bases to 0.

  • normalize_filtered (bool) – Normalize probabilities to sum to 1 after filtering

  • seed (int) – Random seed for sampling

Returns

List of strings

Return type

seqs (list)
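
A generation sketch, assuming `model` is a trained LightningModel; the label string and sequence length are placeholders.

# Generate 10 sequences conditioned on the label "44"
seqs = model.generate(
    labels=["44"] * 10,
    max_new_tokens=80,  # number of bases to generate
    temperature=1.0,
    top_k=4,
    seed=0,
)
print(seqs[:2])  # list of generated DNA strings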

normalize_filtered_probs(filtered_probs)[source]

Normalize probabilities at each position to sum to 1.

Parameters

filtered_probs (torch.FloatTensor) – Tensor of shape (N, 16, L) or (N, 16)

Returns

Normalized tensor of the same shape

on_save_checkpoint(checkpoint)[source]

Save data-related parameters to the model checkpoint during training.

probs_to_likelihood(probs, idxs)[source]

Compute the likelihood of each base in a sequence given model predictions on the sequence.

Parameters
  • probs (torch.FloatTensor) – tensor of shape (N, 16, L)

  • idxs (torch.LongTensor) – tensor of shape (N, L)

Returns

tensor of shape (N, L) containing the probabilities of real bases

sample_idxs(probs, random_state=None, top_k=None, top_p=None, normalize_filtered=True)[source]

Sample from model predictions at a single position to return a single base per example

Parameters
  • probs (torch.tensor, dtype torch.float32) – tensor of shape (N, 16)

  • random_state (torch.Generator) – torch.Generator object

  • top_k (int) – Select the top k bases at each position. Set probabilities of other bases to 0.

  • top_p (float) – Select the top bases at each position until their cumulative probability reaches this value. Set probabilities of other bases to 0.

  • normalize_filtered (bool) – Normalize probabilities to sum to 1 after filtering

Returns

tensor of shape (N)

Return type

idxs (torch.LongTensor)

threshold_probs(filtered_probs, top_k=None, top_p=None)[source]

Threshold the filtered probabilities for valid bases

Parameters
  • filtered_probs (torch.tensor, dtype torch.float32) – tensor of shape (N, 4)

  • top_k (int) – Select the top k bases at each position. Set probabilities of other bases to 0.

  • top_p (float) – Select the top bases at each position until their cumulative probability reaches this value. Set probabilities of other bases to 0.

Returns

tensor of shape (N, 4)

train_on_dataset(train_dataset, val_dataset, batch_size=128, num_workers=8, device=0, max_epochs=3, val_check_interval=5000, weights=None, save_all=False)[source]

Train regLM model.

Parameters
  • train_dataset (CharDataset) – Training dataset

  • val_dataset (CharDataset) – Validation dataset

  • batch_size (int) – Batch size

  • num_workers (int) – Number of workers for training

  • device (int) – GPU index

  • max_epochs (int) – Number of epochs to train

  • val_check_interval (int) – Number of steps after which to check validation loss

Returns

pl.Trainer object
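
A training sketch, assuming `model` is an initialized LightningModel; the training and validation data are tiny placeholders.

from reglm.dataset import CharDataset

train_seqs, train_labels = ["ACGTACGTAC", "TTGACGGATC"], ["03", "40"]
val_seqs, val_labels = ["GGGTACGTAC"], ["22"]

train_ds = CharDataset(train_seqs, train_labels, seq_len=10)
val_ds = CharDataset(val_seqs, val_labels, seq_len=10)

trainer = model.train_on_dataset(
    train_ds,
    val_ds,
    batch_size=128,
    num_workers=8,
    device=0,
    max_epochs=3,
    val_check_interval=5000,
)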

training_step(batch, batch_idx)[source]

Here you compute and return the training loss and some additional metrics for e.g. the progress bar or logger.

Parameters
  • batch – The output of your DataLoader.

  • batch_idx – The index of this batch.

Returns

Any of the following:

  • Tensor - The loss tensor

  • dict - A dictionary. Can include any keys, but must include the key 'loss'

  • None - Training will skip to the next batch. This is only for automatic optimization.

    This is not supported for multi-GPU, TPU, IPU, or DeepSpeed.

In this step you’d normally do the forward pass and calculate the loss for a batch. You can also do fancier things like multiple forward passes or something model specific.

Example:

def training_step(self, batch, batch_idx):
    x, y, z = batch
    out = self.encoder(x)
    loss = self.loss(out, x)
    return loss

If you define multiple optimizers, this step will be called with an additional optimizer_idx parameter.

# Multiple optimizers (e.g.: GANs)
def training_step(self, batch, batch_idx, optimizer_idx):
    if optimizer_idx == 0:
        # do training_step with encoder
        ...
    if optimizer_idx == 1:
        # do training_step with decoder
        ...

If you add truncated back propagation through time you will also get an additional argument with the hidden states of the previous step.

# Truncated back-propagation through time
def training_step(self, batch, batch_idx, hiddens):
    # hiddens are the hidden states from the previous truncated backprop step
    out, hiddens = self.lstm(data, hiddens)
    loss = ...
    return {"loss": loss, "hiddens": hiddens}

Note

The loss value shown in the progress bar is smoothed (averaged) over the last values, so it differs from the actual loss returned in train/validation step.

Note

When accumulate_grad_batches > 1, the loss returned here will be automatically normalized by accumulate_grad_batches internally.

validation_epoch_end(output)[source]

Called at the end of the validation epoch with the outputs of all validation steps.

# the pseudocode for these calls
val_outs = []
for val_batch in val_data:
    out = validation_step(val_batch)
    val_outs.append(out)
validation_epoch_end(val_outs)
Parameters

outputs – List of outputs you defined in validation_step(), or if there are multiple dataloaders, a list containing a list of outputs for each dataloader.

Returns

None

Note

If you didn’t define a validation_step(), this won’t be called.

Examples

With a single dataloader:

def validation_epoch_end(self, val_step_outputs):
    for out in val_step_outputs:
        ...

With multiple dataloaders, outputs will be a list of lists. The outer list contains one entry per dataloader, while the inner list contains the individual outputs of each validation step for that dataloader.

def validation_epoch_end(self, outputs):
    for dataloader_output_result in outputs:
        dataloader_outs = dataloader_output_result.dataloader_i_outputs

    self.log("final_metric", final_value)

validation_step(batch, batch_idx)[source]

Operates on a single batch of data from the validation set. In this step you might generate examples or calculate anything of interest such as accuracy.

# the pseudocode for these calls
val_outs = []
for val_batch in val_data:
    out = validation_step(val_batch)
    val_outs.append(out)
validation_epoch_end(val_outs)
Parameters
  • batch – The output of your DataLoader.

  • batch_idx – The index of this batch.

  • dataloader_idx – The index of the dataloader that produced this batch. (only if multiple val dataloaders used)

Returns

  • Any object or value

  • None - Validation will skip to the next batch

# pseudocode of order
val_outs = []
for val_batch in val_data:
    out = validation_step(val_batch)
    if defined("validation_step_end"):
        out = validation_step_end(out)
    val_outs.append(out)
val_outs = validation_epoch_end(val_outs)
# if you have one val dataloader:
def validation_step(self, batch, batch_idx):
    ...


# if you have multiple val dataloaders:
def validation_step(self, batch, batch_idx, dataloader_idx=0):
    ...

Examples:

# CASE 1: A single validation dataset
def validation_step(self, batch, batch_idx):
    x, y = batch

    # implement your own
    out = self(x)
    loss = self.loss(out, y)

    # log 6 example images
    # or generated text... or whatever
    sample_imgs = x[:6]
    grid = torchvision.utils.make_grid(sample_imgs)
    self.logger.experiment.add_image('example_images', grid, 0)

    # calculate acc
    labels_hat = torch.argmax(out, dim=1)
    val_acc = torch.sum(y == labels_hat).item() / (len(y) * 1.0)

    # log the outputs!
    self.log_dict({'val_loss': loss, 'val_acc': val_acc})

If you pass in multiple val dataloaders, validation_step() will have an additional argument. We recommend setting the default value of 0 so that you can quickly switch between single and multiple dataloaders.

# CASE 2: multiple validation dataloaders
def validation_step(self, batch, batch_idx, dataloader_idx=0):
    # dataloader_idx tells you which dataset this is.
    ...

Note

If you don’t need to validate you don’t need to implement this method.

Note

When the validation_step() is called, the model has been put in eval mode and PyTorch gradients have been disabled. At the end of validation, the model goes back to training mode and gradients are enabled.

reglm.lightning.load_pretrained_model(ckpt_dir='./checkpoints/', model='hyenadna-medium-160k-seqlen', hyenadna_path='/code/hyena-dna')[source]

Load a pretrained hyenaDNA foundation model.

Parameters
  • ckpt_dir (str) – Path to directory containing downloaded model checkpoints, or in which they should be downloaded

  • model (str) – Name of model to load

  • hyenadna_path (str) – Path to cloned hyenaDNA repository

Returns

pre-trained HyenaDNA foundation model

Return type

model (nn.Module)
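
A loading sketch; both paths are placeholders for the checkpoint directory and the cloned hyena-dna repository.

from reglm.lightning import load_pretrained_model

backbone = load_pretrained_model(
    ckpt_dir="./checkpoints/",
    model="hyenadna-medium-160k-seqlen",
    hyenadna_path="/code/hyena-dna",
)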

reglm.metrics module

reglm.metrics.compute_accuracy(model, seqs, shuffle_labels=False, batch_size=64, num_workers=8)[source]

Compute per-base accuracy of a trained regLM model on labeled sequences

Parameters
  • model (pl.LightningModule) – Trained regLM model

  • seqs (pd.DataFrame) – Dataframe containing sequences under ‘Sequence’ and labels under ‘label’.

  • shuffle_labels (bool) – Whether to shuffle the labels among sequences before computing accuracy.

  • batch_size (int) – Batch size for inference

  • num_workers (int) – Number of workers for inference

Returns

original dataframe with added columns for per-base and average accuracy.

Return type

seqs (pd.DataFrame)
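
A sketch of a call, assuming `model` is a trained regLM LightningModel; the dataframe contents are placeholders but the column names follow the parameter description above.

import pandas as pd
from reglm.metrics import compute_accuracy

seqs = pd.DataFrame(
    {"Sequence": ["ACGTACGTAC", "TTGACGGATC"], "label": ["44", "00"]}
)

seqs = compute_accuracy(model, seqs, shuffle_labels=False, batch_size=64)
# seqs now contains added per-base and average accuracy columns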

reglm.regression module

class reglm.regression.EnformerModel(lr=0.0001, loss='poisson', pretrained=False, dim=1536, depth=11, n_downsamples=7)[source]

Bases: pytorch_lightning.core.module.LightningModule

Enformer-based single-task regression model that can be trained from scratch or fine-tuned.

Parameters
  • lr (float) – learning rate

  • loss (str) – “poisson” or “mse”

  • pretrained (bool) – If True, initialize from the pretrained Enformer model

  • dim (int) – Number of conv layer filters

  • depth (int) – Number of transformer layers

  • n_downsamples (int) – Number of conv/pool blocks

configure_optimizers()[source]

Choose what optimizers and learning-rate schedulers to use in your optimization. Normally you’d need one. But in the case of GANs or similar you might have multiple.

Returns

Any of these 6 options.

  • Single optimizer.

  • List or Tuple of optimizers.

  • Two lists - The first list has multiple optimizers, and the second has multiple LR schedulers (or multiple lr_scheduler_config).

  • Dictionary, with an "optimizer" key, and (optionally) a "lr_scheduler" key whose value is a single LR scheduler or lr_scheduler_config.

  • Tuple of dictionaries as described above, with an optional "frequency" key.

  • None - Fit will run without any optimizer.

The lr_scheduler_config is a dictionary which contains the scheduler and its associated configuration. The default configuration is shown below.

lr_scheduler_config = {
    # REQUIRED: The scheduler instance
    "scheduler": lr_scheduler,
    # The unit of the scheduler's step size, could also be 'step'.
    # 'epoch' updates the scheduler on epoch end whereas 'step'
    # updates it after an optimizer update.
    "interval": "epoch",
    # How many epochs/steps should pass between calls to
    # `scheduler.step()`. 1 corresponds to updating the learning
    # rate after every epoch/step.
    "frequency": 1,
    # Metric to monitor for schedulers like `ReduceLROnPlateau`
    "monitor": "val_loss",
    # If set to `True`, will enforce that the value specified 'monitor'
    # is available when the scheduler is updated, thus stopping
    # training if not found. If set to `False`, it will only produce a warning
    "strict": True,
    # If using the `LearningRateMonitor` callback to monitor the
    # learning rate progress, this keyword can be used to specify
    # a custom logged name
    "name": None,
}

When there are schedulers in which the .step() method is conditioned on a value, such as the torch.optim.lr_scheduler.ReduceLROnPlateau scheduler, Lightning requires that the lr_scheduler_config contains the keyword "monitor" set to the metric name that the scheduler should be conditioned on.

# The ReduceLROnPlateau scheduler requires a monitor
def configure_optimizers(self):
    optimizer = Adam(...)
    return {
        "optimizer": optimizer,
        "lr_scheduler": {
            "scheduler": ReduceLROnPlateau(optimizer, ...),
            "monitor": "metric_to_track",
            "frequency": "indicates how often the metric is updated"
            # If "monitor" references validation metrics, then "frequency" should be set to a
            # multiple of "trainer.check_val_every_n_epoch".
        },
    }


# In the case of two optimizers, only one using the ReduceLROnPlateau scheduler
def configure_optimizers(self):
    optimizer1 = Adam(...)
    optimizer2 = SGD(...)
    scheduler1 = ReduceLROnPlateau(optimizer1, ...)
    scheduler2 = LambdaLR(optimizer2, ...)
    return (
        {
            "optimizer": optimizer1,
            "lr_scheduler": {
                "scheduler": scheduler1,
                "monitor": "metric_to_track",
            },
        },
        {"optimizer": optimizer2, "lr_scheduler": scheduler2},
    )

Metrics can be made available to monitor by simply logging them using self.log('metric_to_track', metric_val) in your LightningModule.

Note

The frequency value specified in a dict along with the optimizer key is an int corresponding to the number of sequential batches optimized with the specific optimizer. It should be given to none or to all of the optimizers. There is a difference between passing multiple optimizers in a list, and passing multiple optimizers in dictionaries with a frequency of 1:

  • In the former case, all optimizers will operate on the given batch in each optimization step.

  • In the latter, only one optimizer will operate on the given batch at every step.

This is different from the frequency value specified in the lr_scheduler_config mentioned above.

def configure_optimizers(self):
    optimizer_one = torch.optim.SGD(self.model.parameters(), lr=0.01)
    optimizer_two = torch.optim.SGD(self.model.parameters(), lr=0.01)
    return [
        {"optimizer": optimizer_one, "frequency": 5},
        {"optimizer": optimizer_two, "frequency": 10},
    ]

In this example, the first optimizer will be used for the first 5 steps, the second optimizer for the next 10 steps and that cycle will continue. If an LR scheduler is specified for an optimizer using the lr_scheduler key in the above dict, the scheduler will only be updated when its optimizer is being used.

Examples:

# most cases. no learning rate scheduler
def configure_optimizers(self):
    return Adam(self.parameters(), lr=1e-3)

# multiple optimizer case (e.g.: GAN)
def configure_optimizers(self):
    gen_opt = Adam(self.model_gen.parameters(), lr=0.01)
    dis_opt = Adam(self.model_dis.parameters(), lr=0.02)
    return gen_opt, dis_opt

# example with learning rate schedulers
def configure_optimizers(self):
    gen_opt = Adam(self.model_gen.parameters(), lr=0.01)
    dis_opt = Adam(self.model_dis.parameters(), lr=0.02)
    dis_sch = CosineAnnealing(dis_opt, T_max=10)
    return [gen_opt, dis_opt], [dis_sch]

# example with step-based learning rate schedulers
# each optimizer has its own scheduler
def configure_optimizers(self):
    gen_opt = Adam(self.model_gen.parameters(), lr=0.01)
    dis_opt = Adam(self.model_dis.parameters(), lr=0.02)
    gen_sch = {
        'scheduler': ExponentialLR(gen_opt, 0.99),
        'interval': 'step'  # called after each training step
    }
    dis_sch = CosineAnnealing(dis_opt, T_max=10) # called every epoch
    return [gen_opt, dis_opt], [gen_sch, dis_sch]

# example with optimizer frequencies
# see training procedure in `Improved Training of Wasserstein GANs`, Algorithm 1
# https://arxiv.org/abs/1704.00028
def configure_optimizers(self):
    gen_opt = Adam(self.model_gen.parameters(), lr=0.01)
    dis_opt = Adam(self.model_dis.parameters(), lr=0.02)
    n_critic = 5
    return (
        {'optimizer': dis_opt, 'frequency': n_critic},
        {'optimizer': gen_opt, 'frequency': 1}
    )

Note

Some things to know:

  • Lightning calls .backward() and .step() on each optimizer as needed.

  • If a learning rate scheduler is specified in configure_optimizers() with key "interval" (default “epoch”) in the scheduler configuration, Lightning will call the scheduler’s .step() method automatically in case of automatic optimization.

  • If you use 16-bit precision (precision=16), Lightning will automatically handle the optimizers.

  • If you use multiple optimizers, training_step() will have an additional optimizer_idx parameter.

  • If you use torch.optim.LBFGS, Lightning handles the closure function automatically for you.

  • If you use multiple optimizers, gradients will be calculated only for the parameters of the current optimizer at each training step.

  • If you need to control how often those optimizers step or override the default .step() schedule, override the optimizer_step() hook.

forward(x, return_logits=False)[source]

Same as torch.nn.Module.forward().

Parameters
  • *args – Whatever you decide to pass into the forward method.

  • **kwargs – Keyword arguments are also possible.

Returns

Your model’s output

predict_on_dataset(dataset, device=0, num_workers=1, batch_size=512)[source]

train_on_dataset(train_dataset, val_dataset, device=0, batch_size=512, num_workers=1, save_dir='.', max_epochs=10, weights=None)[source]

training_step(batch, batch_idx)[source]

Here you compute and return the training loss and some additional metrics for e.g. the progress bar or logger.

Parameters
  • batch – The output of your DataLoader.

  • batch_idx – The index of this batch.

Returns

Any of the following:

  • Tensor - The loss tensor

  • dict - A dictionary. Can include any keys, but must include the key 'loss'

  • None - Training will skip to the next batch. This is only for automatic optimization.

    This is not supported for multi-GPU, TPU, IPU, or DeepSpeed.

In this step you’d normally do the forward pass and calculate the loss for a batch. You can also do fancier things like multiple forward passes or something model specific.

Example:

def training_step(self, batch, batch_idx):
    x, y, z = batch
    out = self.encoder(x)
    loss = self.loss(out, x)
    return loss

If you define multiple optimizers, this step will be called with an additional optimizer_idx parameter.

# Multiple optimizers (e.g.: GANs)
def training_step(self, batch, batch_idx, optimizer_idx):
    if optimizer_idx == 0:
        # do training_step with encoder
        ...
    if optimizer_idx == 1:
        # do training_step with decoder
        ...

If you add truncated back propagation through time you will also get an additional argument with the hidden states of the previous step.

# Truncated back-propagation through time
def training_step(self, batch, batch_idx, hiddens):
    # hiddens are the hidden states from the previous truncated backprop step
    out, hiddens = self.lstm(data, hiddens)
    loss = ...
    return {"loss": loss, "hiddens": hiddens}

Note

The loss value shown in the progress bar is smoothed (averaged) over the last values, so it differs from the actual loss returned in train/validation step.

Note

When accumulate_grad_batches > 1, the loss returned here will be automatically normalized by accumulate_grad_batches internally.

validation_epoch_end(output)[source]

Called at the end of the validation epoch with the outputs of all validation steps.

# the pseudocode for these calls
val_outs = []
for val_batch in val_data:
    out = validation_step(val_batch)
    val_outs.append(out)
validation_epoch_end(val_outs)
Parameters

outputs – List of outputs you defined in validation_step(), or if there are multiple dataloaders, a list containing a list of outputs for each dataloader.

Returns

None

Note

If you didn’t define a validation_step(), this won’t be called.

Examples

With a single dataloader:

def validation_epoch_end(self, val_step_outputs):
    for out in val_step_outputs:
        ...

With multiple dataloaders, outputs will be a list of lists. The outer list contains one entry per dataloader, while the inner list contains the individual outputs of each validation step for that dataloader.

def validation_epoch_end(self, outputs):
    for dataloader_output_result in outputs:
        dataloader_outs = dataloader_output_result.dataloader_i_outputs

    self.log("final_metric", final_value)

validation_step(batch, batch_idx)[source]

Operates on a single batch of data from the validation set. In this step you might generate examples or calculate anything of interest such as accuracy.

# the pseudocode for these calls
val_outs = []
for val_batch in val_data:
    out = validation_step(val_batch)
    val_outs.append(out)
validation_epoch_end(val_outs)
Parameters
  • batch – The output of your DataLoader.

  • batch_idx – The index of this batch.

  • dataloader_idx – The index of the dataloader that produced this batch. (only if multiple val dataloaders used)

Returns

  • Any object or value

  • None - Validation will skip to the next batch

# pseudocode of order
val_outs = []
for val_batch in val_data:
    out = validation_step(val_batch)
    if defined("validation_step_end"):
        out = validation_step_end(out)
    val_outs.append(out)
val_outs = validation_epoch_end(val_outs)
# if you have one val dataloader:
def validation_step(self, batch, batch_idx):
    ...


# if you have multiple val dataloaders:
def validation_step(self, batch, batch_idx, dataloader_idx=0):
    ...

Examples:

# CASE 1: A single validation dataset
def validation_step(self, batch, batch_idx):
    x, y = batch

    # implement your own
    out = self(x)
    loss = self.loss(out, y)

    # log 6 example images
    # or generated text... or whatever
    sample_imgs = x[:6]
    grid = torchvision.utils.make_grid(sample_imgs)
    self.logger.experiment.add_image('example_images', grid, 0)

    # calculate acc
    labels_hat = torch.argmax(out, dim=1)
    val_acc = torch.sum(y == labels_hat).item() / (len(y) * 1.0)

    # log the outputs!
    self.log_dict({'val_loss': loss, 'val_acc': val_acc})

If you pass in multiple val dataloaders, validation_step() will have an additional argument. We recommend setting the default value of 0 so that you can quickly switch between single and multiple dataloaders.

# CASE 2: multiple validation dataloaders
def validation_step(self, batch, batch_idx, dataloader_idx=0):
    # dataloader_idx tells you which dataset this is.
    ...

Note

If you don’t need to validate you don’t need to implement this method.

Note

When the validation_step() is called, the model has been put in eval mode and PyTorch gradients have been disabled. At the end of validation, the model goes back to training mode and gradients are enabled.

class reglm.regression.MultiTaskEnformerModel(model1, model2, model3=None, mean=False, specificity=None)[source]

Bases: torch.nn.modules.module.Module

Combine multiple single-task enformer models into a single object.

Parameters
  • models (list) – List of multiple EnformerModel objects

  • device (int) – GPU index

forward(x)[source]

Define the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

predict_on_dataset(ds, **kwargs)[source]

class reglm.regression.SeqDataset(seqs, seq_len=None)[source]

Bases: torch.utils.data.dataset.Dataset

PyTorch dataset class for training Enformer-based regression models

Parameters
  • seqs (list, pd.DataFrame) – either a list of DNA sequences, or a dataframe whose first column is DNA sequences and remaining columns are labels.

  • seq_len (int) – Length of sequences to return. Sequences will be padded with Ns on the right to reach this length.
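
A sketch combining SeqDataset with EnformerModel for training and prediction; the dataframes are tiny placeholders whose first column holds the DNA sequences.

import pandas as pd
from reglm.regression import EnformerModel, SeqDataset

train_df = pd.DataFrame({"Sequence": ["ACGTACGTAC", "TTGACGGATC"], "activity": [0.5, 2.1]})
val_df = pd.DataFrame({"Sequence": ["GGGTACGTAC", "TTGACGGTTC"], "activity": [1.0, 0.2]})

train_ds = SeqDataset(train_df, seq_len=10)
val_ds = SeqDataset(val_df, seq_len=10)

reg_model = EnformerModel(lr=1e-4, loss="poisson", pretrained=False)
trainer = reg_model.train_on_dataset(train_ds, val_ds, device=0, batch_size=2, max_epochs=1)

# Predict on sequences alone (no labels needed at inference time)
preds = reg_model.predict_on_dataset(SeqDataset(val_df["Sequence"].tolist(), seq_len=10))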

reglm.utils module

reglm.utils.get_label_tokens(values, percentiles)[source]

Return labels for sequences given cutoff percentiles

Parameters
  • values (list) – Values for which to calculate percentiles

  • percentiles (list) – Percentiles at which to split values

Returns

List containing the label token corresponding to each value

reglm.utils.get_percentiles(values, n_bins=None, qlist=None)[source]

Compute percentile cutoffs at which to bin the given values

Parameters
  • values (list) – Values for which to calculate percentiles

  • n_bins (int) – Number of equal bins into which to split values

  • qlist (list) – Quantiles to split values into

Returns

List containing percentiles at which to split the values
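
A sketch of using the two binning helpers together; the values are placeholders.

from reglm.utils import get_label_tokens, get_percentiles

values = [0.1, 0.4, 0.7, 1.2, 3.5, 4.0]

# Percentile cutoffs that split the values into 4 equal bins
cutoffs = get_percentiles(values, n_bins=4)

# Label token (bin index) for each value, given the cutoffs
tokens = get_label_tokens(values, cutoffs)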

reglm.utils.matrix_to_scores(matrix, seqs)[source]

Convert a tensor of shape N x seq_len x 4 to a 2-D array of shape N x seq_len containing scores for the actual bases in each sequence

Parameters
  • matrix (torch.Tensor) – A tensor of shape N x seq_len x 4

  • seqs (list) – List of DNA sequences of length N

Returns

array of shape N x seq_len, which will contain the values in matrix that correspond to the real bases in seqs.

Return type

scores (np.array)

reglm.utils.scores_to_matrix(scores, seqs)[source]

Convert per-base scores to an N x seq_len x 4 numpy array

Parameters
  • scores (torch.Tensor) – tensor of shape N x seq_len

  • seqs (list) – List of DNA sequences of length N

Returns

An array of shape N x seq_len x 4, in which the entries corresponding to each base in seqs will be filled with the values in scores, and other entries will be 0.

Return type

matrix (np.array)

reglm.utils.seqs_to_idxs(seqs)[source]

Convert DNA sequences to indices

Parameters

seqs (list) – List of sequences to convert into indices

Returns

np.array of shape (len(seqs), seq_len) containing the sequences as indices

reglm.utils.tokenize(df, cols, names, n_bins=None, qlist=None, percentiles=None)[source]

Create labels for sequences by dividing their associated values into bins

Parameters
  • df (pd.DataFrame) – Dataframe containing label values

  • cols (list) – Names of columns to tokenize

  • names (list) – Names to use for the returned tokens

  • n_bins (int) – Number of equal bins into which to split values

  • qlist (list) – Quantiles to split values into

  • percentiles (dict) – Dictionary with columns from cols as keys and lists of percentile cutoffs as values.

Returns

Original dataframe with additional columns containing tokenized labels

Return type

df (pd.DataFrame)
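
A sketch of labeling a dataframe of sequences; the column names and values are placeholders.

import pandas as pd
from reglm.utils import tokenize

df = pd.DataFrame({"Sequence": ["ACGT", "TTGA", "GGCC"], "activity": [0.2, 1.5, 3.0]})

# Add a column of tokenized labels by splitting "activity" into 3 equal bins
df = tokenize(df, cols=["activity"], names=["activity_token"], n_bins=3)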

Module contents