simulation.utils.machine_learning.cycle_gan.models.generator module

Summary

Functions:

create_generator

Create a generator.

Reference

create_generator(input_nc: int, output_nc: int, ngf: int, netg: str, norm: str = 'batch', use_dropout: bool = False, activation: torch.nn.modules.module.Module = Tanh(), conv_layers_in_block: int = 2, dilations: Optional[List[int]] = None) → torch.nn.modules.module.Module[source]

Create a generator.

Returns the generator as a torch.nn.Module.

Our current implementation provides two types of generators.

U-Net:

[unet_128] (for 128x128 input images) and [unet_256] (for 256x256 input images). The original U-Net paper: https://arxiv.org/abs/1505.04597

ResNet-based generator:

[resnet_6blocks] (with 6 ResNet blocks) and [resnet_9blocks] (with 9 ResNet blocks). The ResNet-based generator consists of several ResNet blocks between a few downsampling/upsampling operations. We adapt Torch code from Justin Johnson’s neural style transfer project (https://github.com/jcjohnson/fast-neural-style).

It uses ReLU for non-linearity.
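To illustrate how the conv_layers_in_block and dilations parameters shape a ResNet block, here is a minimal sketch (not the actual implementation; the class name ResnetBlockSketch and its internals are assumptions). Each conv layer uses padding equal to its dilation so that a 3x3 kernel preserves the spatial size, and a skip connection adds the input back to the block's output:

```python
import torch
import torch.nn as nn


class ResnetBlockSketch(nn.Module):
    """Hypothetical ResNet block with configurable conv layers and dilations."""

    def __init__(self, dim: int, conv_layers_in_block: int = 2, dilations=None):
        super().__init__()
        # Default: no dilation, one entry per conv layer.
        dilations = dilations or [1] * conv_layers_in_block
        layers = []
        for i, d in enumerate(dilations):
            # padding = dilation * (kernel_size - 1) // 2 keeps H and W unchanged
            layers.append(nn.Conv2d(dim, dim, kernel_size=3, padding=d, dilation=d))
            layers.append(nn.InstanceNorm2d(dim))
            if i < len(dilations) - 1:
                layers.append(nn.ReLU(inplace=True))  # ReLU non-linearity
        self.conv_block = nn.Sequential(*layers)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.conv_block(x)  # residual skip connection


block = ResnetBlockSketch(dim=64, conv_layers_in_block=2, dilations=[1, 2])
out = block(torch.randn(1, 64, 32, 32))
```

Because the padding matches the dilation, the output tensor keeps the input's shape, which is what makes the residual addition valid.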

Parameters
  • input_nc (int) – # of input image channels: 3 for RGB and 1 for grayscale

  • output_nc (int) – # of output image channels: 3 for RGB and 1 for grayscale

  • ngf (int) – # of gen filters in the last conv layer

  • netg (str) – specify generator architecture [resnet_<ANY_INTEGER>blocks | unet_256 | unet_128]

  • norm (str) – instance normalization or batch normalization [instance | batch | none]

  • use_dropout (bool) – enable or disable dropout

  • activation (nn.Module) – which activation to use

  • conv_layers_in_block (int) – specify number of convolution layers per ResNet block

  • dilations (Optional[List[int]]) – dilations for the individual conv layers in every ResNet block
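The netg string encodes the architecture choice, including the resnet_&lt;ANY_INTEGER&gt;blocks pattern. The sketch below shows one plausible way such a string could be interpreted; the helper parse_netg is hypothetical, not part of the module:

```python
import re


def parse_netg(netg: str) -> dict:
    """Hypothetical helper: interpret a netg architecture string."""
    # resnet_<ANY_INTEGER>blocks, e.g. "resnet_9blocks"
    m = re.fullmatch(r"resnet_(\d+)blocks", netg)
    if m:
        return {"type": "resnet", "n_blocks": int(m.group(1))}
    # unet_128 (128x128 inputs) or unet_256 (256x256 inputs)
    if netg in ("unet_128", "unet_256"):
        return {"type": "unet", "input_size": int(netg.split("_")[1])}
    raise ValueError(f"Unknown generator architecture: {netg}")
```

For example, parse_netg("resnet_6blocks") would select a ResNet generator with 6 blocks, while parse_netg("unet_256") would select the U-Net variant for 256x256 images.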