simulation.utils.machine_learning.models package¶
Subpackages¶
- simulation.utils.machine_learning.models.test package
- Submodules
- simulation.utils.machine_learning.models.test.test_helper module
- simulation.utils.machine_learning.models.test.test_resnet_block module
- simulation.utils.machine_learning.models.test.test_resnet_generator module
- simulation.utils.machine_learning.models.test.test_wgan_critic module
- Module contents
Submodules¶
simulation.utils.machine_learning.models.helper module¶
Functions:
- get_norm_layer – Return a normalization layer.
- get_scheduler – Return a learning rate scheduler.
- init_weights – Initialize network weights.
- init_net – Initialize a network.
- set_requires_grad – Set requires_grad=False for all the networks to avoid unnecessary computations.
- get_norm_layer(norm_type: str = 'instance') → Type[torch.nn.modules.module.Module][source]¶
Return a normalization layer.
For BatchNorm, we use learnable affine parameters and track running statistics (mean/stddev). For InstanceNorm, we use neither learnable affine parameters nor running statistics.
- Parameters
norm_type – Name of the normalization layer: batch | instance | none
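Example (a minimal usage sketch; the conv stack and channel count are illustrative):

```python
import torch.nn as nn

from simulation.utils.machine_learning.models.helper import get_norm_layer

# Resolve the normalization layer class by name, then instantiate it
# with the number of feature channels produced by the preceding conv.
norm_layer = get_norm_layer(norm_type="instance")
model = nn.Sequential(
    nn.Conv2d(3, 64, kernel_size=3, padding=1),
    norm_layer(64),
    nn.ReLU(),
)
```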
- get_scheduler(optimizer: torch.optim.optimizer.Optimizer, lr_policy: str, lr_decay_iters: int, n_epochs: int, lr_step_factor: float) → Union[torch.optim.lr_scheduler.LambdaLR, torch.optim.lr_scheduler.StepLR, torch.optim.lr_scheduler.ReduceLROnPlateau][source]¶
Return a learning rate scheduler.
For ‘linear’, we keep the same learning rate for the first n_epochs epochs and then linearly decay the rate to zero over the following epochs. For the other schedulers (step, plateau, and cosine), we use the default PyTorch schedulers. See https://pytorch.org/docs/stable/optim.html for more details.
- Parameters
optimizer – Optimizer of the network’s parameters
lr_policy – Learning rate policy. [linear | step | plateau | cosine]
lr_decay_iters – Multiply the learning rate by a gamma every lr_decay_iters iterations
n_epochs – Number of epochs with the initial learning rate
lr_step_factor – Multiplication factor at every step in the step scheduler
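Example (a sketch with the ‘step’ policy; the network, optimizer, and hyperparameter values are illustrative):

```python
import torch
import torch.nn as nn

from simulation.utils.machine_learning.models.helper import get_scheduler

net = nn.Linear(10, 2)  # stand-in for an actual network
optimizer = torch.optim.Adam(net.parameters(), lr=2e-4)

# 'step' policy: multiply the learning rate by lr_step_factor
# every lr_decay_iters steps (see the parameter docs above).
scheduler = get_scheduler(
    optimizer,
    lr_policy="step",
    lr_decay_iters=50,
    n_epochs=100,
    lr_step_factor=0.1,
)

for epoch in range(100):
    ...  # train for one epoch
    scheduler.step()
```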
- init_weights(net: torch.nn.modules.module.Module, init_type: str = 'normal', init_gain: float = 0.02) → None[source]¶
Initialize network weights.
The original pix2pix and CycleGAN papers use ‘normal’, but xavier and kaiming might work better for some applications. Feel free to experiment.
- Parameters
net – Network to be initialized
init_type – Name of an initialization method: normal | xavier | kaiming | orthogonal
init_gain – Scaling factor for normal, xavier and orthogonal.
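Example (a sketch; the network is a stand-in):

```python
import torch.nn as nn

from simulation.utils.machine_learning.models.helper import init_weights

net = nn.Sequential(nn.Conv2d(3, 64, kernel_size=3), nn.ReLU())
# Re-initialize all conv/linear weights in place; 'normal' draws
# from a normal distribution scaled by init_gain.
init_weights(net, init_type="normal", init_gain=0.02)
```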
- init_net(net: torch.nn.modules.module.Module, init_type: str = 'normal', init_gain: float = 0.02, device: torch.device = device(type='cuda', index=0)) → torch.nn.modules.module.Module[source]¶
Initialize a network.
Registers the CPU/GPU device and initializes the network weights.
Returns an initialized network.
- Parameters
net – Network to be initialized
init_type – Name of an initialization method: normal | xavier | kaiming | orthogonal
init_gain – Scaling factor for normal, xavier and orthogonal.
device – Device on which the net runs
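Example (a sketch; the network and device choice are illustrative):

```python
import torch
import torch.nn as nn

from simulation.utils.machine_learning.models.helper import init_net

net = nn.Conv2d(3, 64, kernel_size=3)  # stand-in for an actual network
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
# Registers the device and initializes the weights in one call.
net = init_net(net, init_type="kaiming", init_gain=0.02, device=device)
```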
- set_requires_grad(nets: Union[List[torch.nn.modules.module.Module], torch.nn.modules.module.Module], requires_grad: bool = False)[source]¶
Set requires_grad=False for all the networks to avoid unnecessary computations.
- Parameters
nets – A single network or a list of networks
requires_grad – Enable or disable grads
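Example (a typical GAN training pattern, sketched with a placeholder critic):

```python
import torch.nn as nn

from simulation.utils.machine_learning.models.helper import set_requires_grad

critic = nn.Linear(10, 1)  # placeholder for an actual critic network

set_requires_grad(critic, requires_grad=False)  # freeze during the generator update
...  # generator update step
set_requires_grad(critic, requires_grad=True)   # unfreeze for the critic update
```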
simulation.utils.machine_learning.models.resnet_block module¶
Classes:
- ResnetBlock – Define a Resnet block.
- class ResnetBlock(dim: int, padding_type: str, norm_layer: Type[torch.nn.modules.module.Module], use_dropout: bool, use_bias: bool, n_conv_layers: int = 2, dilations: Optional[List[int]] = None)[source]¶
Bases:
torch.nn.modules.module.Module
Define a Resnet block.
Methods:
- forward(x) – Standard forward with skip connection.
Attributes:
- training
- _is_full_backward_hook
- forward(x: torch.Tensor) → torch.Tensor[source]¶
Standard forward with skip connection.
- Parameters
x – Input tensor
- training: bool¶
- _is_full_backward_hook: Optional[bool]¶
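Example (a minimal sketch; the channel count and spatial size are illustrative). Because the block adds its input back via the skip connection, the output shape matches the input shape:

```python
import torch
import torch.nn as nn

from simulation.utils.machine_learning.models.resnet_block import ResnetBlock

block = ResnetBlock(
    dim=64,
    padding_type="reflect",
    norm_layer=nn.InstanceNorm2d,
    use_dropout=False,
    use_bias=True,
)
x = torch.randn(1, 64, 32, 32)
out = block(x)  # same shape as x because of the skip connection
```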
simulation.utils.machine_learning.models.resnet_generator module¶
Classes:
- ResnetGenerator – Resnet-based generator that consists of Resnet blocks between a few downsampling/upsampling operations.
- class ResnetGenerator(input_nc: int, output_nc: int, ngf: int = 64, norm_layer: Type[torch.nn.modules.module.Module] = <class 'torch.nn.modules.batchnorm.BatchNorm2d'>, use_dropout: bool = False, n_blocks: int = 6, padding_type: str = 'reflect', activation: torch.nn.modules.module.Module = Tanh(), conv_layers_in_block: int = 2, dilations: Optional[List[int]] = None)[source]¶
Bases:
torch.nn.modules.module.Module, simulation.utils.basics.init_options.InitOptions
Resnet-based generator that consists of Resnet blocks between a few downsampling/upsampling operations.
We adapt the Torch code and ideas from Justin Johnson’s neural style transfer project (https://github.com/jcjohnson/fast-neural-style).
Methods:
- forward(input) – Standard forward.
Attributes:
- training
- _is_full_backward_hook
- forward(input: torch.Tensor) → torch.Tensor[source]¶
Standard forward.
- Parameters
input – The input tensor
- training: bool¶
- _is_full_backward_hook: Optional[bool]¶
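Example (a sketch; 9 blocks are a common choice for 256x256 images, not necessarily this project’s default):

```python
import torch

from simulation.utils.machine_learning.models.resnet_generator import ResnetGenerator

generator = ResnetGenerator(input_nc=3, output_nc=3, ngf=64, n_blocks=9)
fake = generator(torch.randn(1, 3, 256, 256))  # -> torch.Size([1, 3, 256, 256])
```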
simulation.utils.machine_learning.models.unet_block module¶
Classes:
- UnetSkipConnectionBlock – Defines the Unet submodule with skip connection.
- class UnetSkipConnectionBlock(outer_nc: int, inner_nc: int, input_nc: Optional[int] = None, submodule: Optional[torch.nn.modules.module.Module] = None, outermost: bool = False, innermost: bool = False, norm_layer: torch.nn.modules.module.Module = <class 'torch.nn.modules.batchnorm.BatchNorm2d'>, use_dropout: bool = False)[source]¶
Bases:
torch.nn.modules.module.Module
Defines the Unet submodule with skip connection.

X -------------------identity----------------------
|-- downsampling --|submodule|-- upsampling --|
Methods:
- forward(x) – Forward with skip connection, if this is not the outermost block.
Attributes:
- training
- _is_full_backward_hook
- forward(x: torch.Tensor) → torch.Tensor[source]¶
Forward with skip connection, if this is not the outermost block.
- Parameters
x (torch.Tensor) – The input tensor
- training: bool¶
- _is_full_backward_hook: Optional[bool]¶
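Example (a sketch of the recursive construction; channel sizes are illustrative and the exact channel bookkeeping depends on the implementation). Blocks are nested from the innermost level outward:

```python
import torch

from simulation.utils.machine_learning.models.unet_block import UnetSkipConnectionBlock

# The innermost block sits inside the outermost one.
inner = UnetSkipConnectionBlock(outer_nc=128, inner_nc=256, innermost=True)
outer = UnetSkipConnectionBlock(
    outer_nc=64, inner_nc=128, submodule=inner, outermost=True
)
out = outer(torch.randn(1, 64, 64, 64))  # outermost forward returns no skip concat
```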
simulation.utils.machine_learning.models.unet_generator module¶
Classes:
- UnetGenerator – Create a Unet-based generator.
- class UnetGenerator(input_nc: int, output_nc: int, num_downs: int, ngf: int = 64, norm_layer: torch.nn.modules.module.Module = <class 'torch.nn.modules.batchnorm.BatchNorm2d'>, use_dropout: bool = False)[source]¶
Bases:
torch.nn.modules.module.Module
Create a Unet-based generator.
Methods:
- forward(input) – Standard forward.
Attributes:
- training
- _is_full_backward_hook
- forward(input: torch.Tensor) → torch.Tensor[source]¶
Standard forward.
- Parameters
input (Tensor) – The input tensor
- training: bool¶
- _is_full_backward_hook: Optional[bool]¶
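Example (a sketch; with num_downs=8 a 256x256 input is halved down to 1x1 at the bottleneck, so the spatial size should be a power of two):

```python
import torch

from simulation.utils.machine_learning.models.unet_generator import UnetGenerator

generator = UnetGenerator(input_nc=3, output_nc=3, num_downs=8)
fake = generator(torch.randn(1, 3, 256, 256))  # -> torch.Size([1, 3, 256, 256])
```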
simulation.utils.machine_learning.models.wasserstein_critic module¶
Classes:
- WassersteinCritic
- class WassersteinCritic(input_nc: int, n_blocks: int = 3, norm: str = 'instance', ndf=32, height=256, width=256, use_dropout: bool = False, padding_type: str = 'reflect', conv_layers_in_block: int = 2, dilations: Optional[List[int]] = None)[source]¶
Bases:
torch.nn.modules.module.Module, simulation.utils.basics.init_options.InitOptions
Methods:
- forward(input) – Defines the computation performed at every call.
- _clip_weights([bounds]) – Clip weights to given bounds.
- perform_optimization_step(generator, …[, …]) – Do one iteration to update the parameters.
Attributes:
- training
- _is_full_backward_hook
- forward(input: torch.Tensor)[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
- perform_optimization_step(generator: torch.nn.modules.module.Module, optimizer: torch.optim.optimizer.Optimizer, batch_critic: torch.Tensor, batch_generator: torch.Tensor, weight_clips: Optional[Tuple[float, float]] = None) → float[source]¶
Do one iteration to update the parameters.
- Parameters
generator – Generation network
optimizer – Optimizer for the critic’s weights
batch_critic – A batch of inputs for the critic
batch_generator – A batch of inputs for the generator
weight_clips – Optional weight bounds for the critic’s weights
- Returns
Current Wasserstein distance estimated by the critic.
- training: bool¶
- _is_full_backward_hook: Optional[bool]¶
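Example (a sketch of one critic update; the batches are random stand-ins, and RMSprop with clipping bounds (-0.01, 0.01) follows the original WGAN recipe rather than this project’s defaults):

```python
import torch

from simulation.utils.machine_learning.models.resnet_generator import ResnetGenerator
from simulation.utils.machine_learning.models.wasserstein_critic import WassersteinCritic

critic = WassersteinCritic(input_nc=3, height=256, width=256)
generator = ResnetGenerator(input_nc=3, output_nc=3)
optimizer = torch.optim.RMSprop(critic.parameters(), lr=5e-5)

batch_critic = torch.randn(8, 3, 256, 256)     # samples shown to the critic
batch_generator = torch.randn(8, 3, 256, 256)  # inputs translated by the generator

# One parameter update for the critic; clipping the weights enforces
# the Lipschitz constraint of the WGAN formulation.
distance = critic.perform_optimization_step(
    generator,
    optimizer,
    batch_critic,
    batch_generator,
    weight_clips=(-0.01, 0.01),
)
print(f"Estimated Wasserstein distance: {distance:.4f}")
```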