simulation.utils.machine_learning.models.helper module

Summary

Functions:

get_norm_layer

Return a normalization layer.

get_scheduler

Return a learning rate scheduler.

init_net

Initialize a network.

init_weights

Initialize network weights.

set_requires_grad

Set requires_grad for the given networks; setting it to False avoids unnecessary computations.

Reference

get_norm_layer(norm_type: str = 'instance') → Type[torch.nn.modules.module.Module][source]

Return a normalization layer.

For BatchNorm, we use learnable affine parameters and track running statistics (mean/std). For InstanceNorm, we use neither learnable affine parameters nor running statistics.

Parameters

norm_type – Name of the normalization layer: batch | instance | none
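A minimal sketch of how such a factory might look, assuming the affine/running-statistics behavior described above (the actual module's implementation may differ):

```python
# Hypothetical sketch of get_norm_layer; mirrors the documented behavior only.
import functools

import torch.nn as nn


def get_norm_layer(norm_type: str = "instance"):
    """Return a normalization layer constructor: batch | instance | none."""
    if norm_type == "batch":
        # BatchNorm: learnable affine parameters, running statistics tracked.
        return functools.partial(nn.BatchNorm2d, affine=True, track_running_stats=True)
    if norm_type == "instance":
        # InstanceNorm: no affine parameters, no running statistics.
        return functools.partial(nn.InstanceNorm2d, affine=False, track_running_stats=False)
    if norm_type == "none":
        return nn.Identity
    raise NotImplementedError(f"normalization layer [{norm_type}] is not found")
```

The returned constructor is then called with the channel count, e.g. `get_norm_layer("batch")(64)`.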

get_scheduler(optimizer: torch.optim.optimizer.Optimizer, lr_policy: str, lr_decay_iters: int, n_epochs: int, lr_step_factor: float) → Union[torch.optim.lr_scheduler.LambdaLR, torch.optim.lr_scheduler.StepLR, torch.optim.lr_scheduler.ReduceLROnPlateau][source]

Return a learning rate scheduler.

For ‘linear’, we keep the same learning rate for the first <n_epochs> epochs and linearly decay the rate to zero over the next <n_epochs_decay> epochs. For other schedulers (step, plateau, and cosine), we use the default PyTorch schedulers. See https://pytorch.org/docs/stable/optim.html for more details.

Parameters
  • optimizer – Optimizer of the network’s parameters

  • lr_policy – Learning rate policy. [linear | step | plateau | cosine]

  • lr_decay_iters – Multiply by a gamma every lr_decay_iters iterations

  • n_epochs – Number of epochs with the initial learning rate

  • lr_step_factor – Multiplication factor at every step in the step scheduler
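The 'linear' policy is typically implemented as a LambdaLR multiplier. A sketch of such a multiplier, assuming the constant-then-linear-decay behavior described above (the decay-length parameter `n_epochs_decay` is an assumption taken from the description, not from the signature):

```python
def linear_lr_multiplier(epoch: int, n_epochs: int, n_epochs_decay: int) -> float:
    """LR multiplier for the 'linear' policy (sketch).

    Constant (1.0) for the first n_epochs epochs, then a linear decay
    to zero over the next n_epochs_decay epochs.
    """
    return 1.0 - max(0, epoch - n_epochs) / float(n_epochs_decay + 1)
```

This would be wired up via `torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda=...)`; the step, plateau, and cosine policies map directly onto `StepLR`, `ReduceLROnPlateau`, and `CosineAnnealingLR`.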

init_weights(net: torch.nn.modules.module.Module, init_type: str = 'normal', init_gain: float = 0.02) → None[source]

Initialize network weights.

The original pix2pix and CycleGAN papers use ‘normal’, but xavier and kaiming might work better for some applications. Feel free to experiment.

Parameters
  • net – Network to be initialized

  • init_type – Name of an initialization method: normal | xavier | kaiming | orthogonal

  • init_gain – Scaling factor for normal, xavier, and orthogonal initialization
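A sketch of the usual pattern for this kind of helper, assuming it dispatches on the layer class name via `Module.apply` (the actual implementation may differ):

```python
# Hypothetical sketch of init_weights; the real module may handle more layer types.
import torch.nn as nn


def init_weights(net: nn.Module, init_type: str = "normal", init_gain: float = 0.02) -> None:
    def init_func(m: nn.Module) -> None:
        classname = m.__class__.__name__
        if hasattr(m, "weight") and ("Conv" in classname or "Linear" in classname):
            if init_type == "normal":
                nn.init.normal_(m.weight.data, 0.0, init_gain)
            elif init_type == "xavier":
                nn.init.xavier_normal_(m.weight.data, gain=init_gain)
            elif init_type == "kaiming":
                nn.init.kaiming_normal_(m.weight.data, a=0, mode="fan_in")
            elif init_type == "orthogonal":
                nn.init.orthogonal_(m.weight.data, gain=init_gain)
            else:
                raise NotImplementedError(f"initialization [{init_type}] is not implemented")
            if m.bias is not None:
                nn.init.constant_(m.bias.data, 0.0)
        elif "BatchNorm" in classname and m.weight is not None:
            # BatchNorm affine parameters: weight ~ N(1, gain), bias = 0.
            nn.init.normal_(m.weight.data, 1.0, init_gain)
            nn.init.constant_(m.bias.data, 0.0)

    net.apply(init_func)  # applies init_func recursively to every submodule
```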

init_net(net: torch.nn.modules.module.Module, init_type: str = 'normal', init_gain: float = 0.02, device: torch.device = device(type='cpu')) → torch.nn.modules.module.Module[source]

Initialize a network.

  1. move the network to the CPU/GPU device;

  2. initialize the network weights.

Return an initialized network.

Parameters
  • net – Network to be initialized

  • init_type – Name of an initialization method: normal | xavier | kaiming | orthogonal

  • init_gain – Scaling factor for normal, xavier, and orthogonal initialization

  • device – Device on which the net runs
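The two steps above can be sketched as follows; the inlined normal init stands in for the full initialization dispatch, so this is an illustration of the structure rather than the module's actual code:

```python
# Hypothetical sketch of init_net: register the device, then init the weights.
import torch
import torch.nn as nn


def init_net(
    net: nn.Module,
    init_type: str = "normal",
    init_gain: float = 0.02,
    device: torch.device = torch.device("cpu"),
) -> nn.Module:
    net.to(device)  # step 1: register the CPU/GPU device

    def init_func(m: nn.Module) -> None:
        # Step 2 (simplified): only the 'normal' scheme is shown here.
        if isinstance(m, (nn.Conv2d, nn.Linear)):
            nn.init.normal_(m.weight, 0.0, init_gain)
            if m.bias is not None:
                nn.init.zeros_(m.bias)

    net.apply(init_func)
    return net  # the initialized network
```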

set_requires_grad(nets: Union[List[torch.nn.modules.module.Module], torch.nn.modules.module.Module], requires_grad: bool = False)[source]

Set requires_grad for the given networks; setting it to False avoids unnecessary computations.

Parameters
  • nets – A single network or a list of networks

  • requires_grad – Enable or disable grads
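A minimal sketch of this helper, assuming it normalizes the single-network case to a list and toggles every parameter (a common pattern when freezing discriminators during generator updates):

```python
# Hypothetical sketch of set_requires_grad based on the documented signature.
from typing import List, Union

import torch.nn as nn


def set_requires_grad(
    nets: Union[List[nn.Module], nn.Module], requires_grad: bool = False
) -> None:
    if not isinstance(nets, list):
        nets = [nets]  # accept a single network or a list of networks
    for net in nets:
        if net is not None:
            for param in net.parameters():
                param.requires_grad = requires_grad
```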