simulation.utils.machine_learning.cycle_gan.configs package¶
Submodules¶
simulation.utils.machine_learning.cycle_gan.configs.base_options module¶
Classes:
- class BaseOptions[source]¶
Bases:
object
Attributes:
activation: Choose which activation to use.
checkpoints_dir: models are saved here
conv_layers_in_block: number of convolution layers per resnet block
crop_size: crop images to this size (after scaling to load_size)
dilations: dilation for individual conv layers in every resnet block
epoch: which epoch to load; set to 'latest' to use the latest cached model
init_gain: scaling factor for normal, xavier and orthogonal initialization
init_type: network initialization [normal | xavier | kaiming | orthogonal]
input_nc: number of input image channels; 3 for RGB and 1 for grayscale
lambda_idt_a: weight for the identity loss of domain A
lambda_idt_b: weight for the identity loss of domain B
lambda_cycle: weight for the cycle loss
load_size: scale images to this size
mask: path to a mask overlaid on all images
n_layers_d: number of layers in the discriminator network
name: name of the experiment
ndf: number of discriminator filters in the first conv layer
netd: discriminator architecture
netg: generator architecture [resnet_<ANY_INTEGER>blocks | unet_256 | unet_128]
ngf: number of generator filters in the last conv layer
no_dropout: no dropout for the generator
norm: instance normalization or batch normalization [instance | batch | none]
output_nc: number of output image channels; 3 for RGB and 1 for grayscale
preprocess: scaling and cropping of images at load time
verbose: if specified, print more debugging information
cycle_noise_stddev: standard deviation of noise added to the cycle input
pool_size: size of the image buffer that stores previously generated images
max_dataset_size: maximum number of images to load; -1 means no limit
is_wgan: whether to use the Wasserstein CycleGAN or the standard CycleGAN
l1_or_l2_loss: whether to use L1 or L2 as the cycle and identity loss function
use_sigmoid: use a sigmoid activation at the end of the discriminator
Methods:
to_dict()
- activation(*input, **kwargs): torch.nn.modules.module.Module = Tanh()¶
Choose which activation to use.
- checkpoints_dir: str = './checkpoints'¶
models are saved here
- conv_layers_in_block: int = 3¶
specify number of convolution layers per resnet block
- crop_size: int = 512¶
crop images to this size (after scaling to load_size)
- dilations: List[int] = [1, 2, 4]¶
dilation for individual conv layers in every resnet block
- epoch: Union[int, str] = 'latest'¶
which epoch to load; set to 'latest' to use the latest cached model
- init_gain: float = 0.02¶
scaling factor for normal, xavier and orthogonal initialization
- init_type: str = 'normal'¶
network initialization [normal | xavier | kaiming | orthogonal]
- input_nc: int = 1¶
Number of input image channels: 3 for RGB and 1 for grayscale
- lambda_idt_a: float = 0.5¶
weight for the identity loss of domain A
- lambda_idt_b: float = 0.5¶
weight for the identity loss of domain B
- lambda_cycle: float = 10¶
weight for the cycle loss (see the loss sketch after this attribute list)
- load_size: int = 512¶
scale images to this size
- mask: str = 'resources/mask.png'¶
Path to a mask overlaid on all images
- n_layers_d: int = 4¶
number of layers in the discriminator network
- name: str = 'dr_drift'¶
name of the experiment; it determines where samples and models are stored
- ndf: int = 32¶
# of discriminator filters in the first conv layer
- netd: str = 'basic'¶
Specify the discriminator architecture [basic | n_layers | no_patch]. The basic model is a 70x70 PatchGAN; n_layers allows specifying the number of layers in the discriminator (see n_layers_d).
- netg: str = 'resnet_9blocks'¶
specify generator architecture [resnet_<ANY_INTEGER>blocks | unet_256 | unet_128]
- ngf: int = 32¶
# of gen filters in the last conv layer
- no_dropout: bool = True¶
if True, the generator uses no dropout
- norm: str = 'instance'¶
instance normalization or batch normalization [instance | batch | none]
- output_nc: int = 1¶
Number of output image channels: 3 for RGB and 1 for grayscale
- preprocess: set = {'crop', 'resize'}¶
Scaling and cropping of images at load time.
[resize | crop | scale_width]
- verbose: bool = False¶
if specified, print more debugging information
- cycle_noise_stddev: float = 0¶
Standard deviation of noise added to the cycle input. Mean is 0.
- pool_size: int = 75¶
size of the image buffer that stores previously generated images
- max_dataset_size: int = 15000¶
maximum number of images to load; -1 means no limit
- is_wgan: bool = False¶
Whether to use the Wasserstein CycleGAN or the standard CycleGAN
- l1_or_l2_loss: str = 'l1'¶
Whether to use L1 or L2 as the cycle and identity loss function: "l1" or "l2" (see the sketch after this attribute list)
- use_sigmoid: bool = True¶
Use a sigmoid activation at the end of the discriminator
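The loss-related options above (lambda_cycle, lambda_idt_a, lambda_idt_b, l1_or_l2_loss, cycle_noise_stddev) jointly shape the CycleGAN objective. The following is a minimal PyTorch sketch of how they typically combine; the generators are placeholders, and the exact scaling of the identity terms is an assumption borrowed from common CycleGAN implementations, not necessarily this package's code:

import torch
import torch.nn as nn

# Values mirror the defaults documented above.
l1_or_l2_loss = "l1"
lambda_cycle = 10
lambda_idt_a = 0.5
lambda_idt_b = 0.5
cycle_noise_stddev = 0.0

criterion = nn.L1Loss() if l1_or_l2_loss == "l1" else nn.MSELoss()

g_ab = nn.Identity()  # placeholder generator A -> B
g_ba = nn.Identity()  # placeholder generator B -> A
real_a = torch.rand(1, 1, 512, 512)  # input_nc = 1, load_size/crop_size = 512
real_b = torch.rand(1, 1, 512, 512)

fake_b = g_ab(real_a)
if cycle_noise_stddev > 0:  # optional zero-mean noise on the cycle input
    fake_b = fake_b + torch.randn_like(fake_b) * cycle_noise_stddev
rec_a = g_ba(fake_b)

loss_cycle_a = criterion(rec_a, real_a) * lambda_cycle
# Identity terms: each generator should leave images of its output domain
# unchanged (the scaling convention here is an assumption).
loss_idt_a = criterion(g_ba(real_a), real_a) * lambda_idt_a
loss_idt_b = criterion(g_ab(real_b), real_b) * lambda_idt_b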
simulation.utils.machine_learning.cycle_gan.configs.test_options module¶
Classes:
- class TestOptions[source]¶
Bases:
simulation.utils.machine_learning.cycle_gan.configs.base_options.BaseOptions
Attributes:
dataset_a: path to images of domain A (real images)
dataset_b: path to images of domain B (simulated images)
results_dir: saves results here
aspect_ratio: aspect ratio of result images
is_train: enable or disable training mode
- dataset_a: List[str] = ['./../../../../data/real_images/maschinen_halle_parking']¶
path to images of domain A (real images).
- dataset_b: List[str] = ['./../../../../data/simulated_images/test_images']¶
path to images of domain B (simulated images).
- results_dir: str = './results/'¶
saves results here.
- aspect_ratio: float = 1¶
aspect ratio of result images
- is_train: bool = False¶
enable or disable training mode
- class WassersteinCycleGANTestOptions[source]¶
Bases:
simulation.utils.machine_learning.cycle_gan.configs.test_options.TestOptions
- class CycleGANTestOptions[source]¶
Bases:
simulation.utils.machine_learning.cycle_gan.configs.test_options.TestOptions
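A short usage sketch for the test configurations, assuming the option classes can be instantiated without arguments and their attributes overridden in place (the override shown is illustrative, not part of the documented API):

from simulation.utils.machine_learning.cycle_gan.configs.test_options import (
    CycleGANTestOptions,
)

opts = CycleGANTestOptions()           # inherits all BaseOptions defaults
opts.results_dir = "./results/demo/"   # hypothetical override for a custom run
assert opts.is_train is False          # test mode is preselected
print(opts.to_dict())                  # to_dict is inherited from BaseOptions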
simulation.utils.machine_learning.cycle_gan.configs.train_options module¶
Classes:
- class TrainOptions[source]¶
Bases:
simulation.utils.machine_learning.cycle_gan.configs.base_options.BaseOptions
Attributes:
dataset_a: Path to images of domain A (real images)
dataset_b: Path to images of domain B (simulated images)
display_id: Window id of the web display
display_port: Visdom port of the web display
is_train: Enable or disable training mode
num_threads: Number of threads for loading data
save_freq: Frequency of saving the current models
print_freq: Frequency of showing training results on the console
beta1: Momentum term of Adam
batch_size: Input batch size
lr: Initial learning rate for Adam
lr_decay_iters: Multiply the learning rate by gamma every lr_decay_iters iterations
lr_policy: Learning rate policy
lr_step_factor: Multiplication factor at every step in the step scheduler
n_epochs: Number of epochs with the initial learning rate
n_epochs_decay: Number of epochs to linearly decay the learning rate to zero
no_flip: If False, flip 50% of all training images vertically
continue_train: Load checkpoints or start from scratch
- dataset_a: List[str] = ['./../../../../data/real_images/beg_2019']¶
Path to images of domain A (real images). Can be a list of folders.
- dataset_b: List[str] = ['./../../../../data/simulated_images/random_roads']¶
Path to images of domain B (simulated images). Can be a list of folders.
- display_id: int = 1¶
Window id of the web display
- display_port: int = 8097¶
Visdom port of the web display
- is_train: bool = True¶
Enable or disable training mode
- num_threads: int = 8¶
Number of threads for loading data
- save_freq: int = 100¶
Frequency of saving the current models
- print_freq: int = 5¶
Frequency of showing training results on the console
- beta1: float = 0.5¶
Momentum term of Adam
- batch_size: int = 3¶
Input batch size
- lr: float = 0.0005¶
Initial learning rate for Adam
- lr_decay_iters: int = 1¶
Multiply the learning rate by gamma every lr_decay_iters iterations (see the scheduler sketch below)
- lr_policy: str = 'step'¶
Learning rate policy. [linear | step | plateau | cosine]
- lr_step_factor: float = 0.1¶
Multiplication factor at every step in the step scheduler
- n_epochs: int = 0¶
Number of epochs with the initial learning rate
- n_epochs_decay: int = 10¶
Number of epochs to linearly decay learning rate to zero
- no_flip: bool = False¶
If False, flip 50% of all training images vertically (data augmentation)
- continue_train: bool = False¶
Load checkpoints or start from scratch
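The optimizer defaults above (beta1, lr, lr_policy='step', lr_step_factor, lr_decay_iters) map onto a standard Adam-plus-StepLR setup in PyTorch. A minimal sketch with a placeholder model; the second Adam beta and the training step itself are assumptions, since only beta1 is documented here:

import torch
import torch.nn as nn
from torch.optim.lr_scheduler import StepLR

model = nn.Conv2d(1, 32, kernel_size=3)  # placeholder network
optimizer = torch.optim.Adam(
    model.parameters(),
    lr=0.0005,           # lr
    betas=(0.5, 0.999),  # beta1 = 0.5; the second coefficient is assumed
)
# lr_policy='step': multiply the learning rate by lr_step_factor (0.1)
# every lr_decay_iters (1) scheduler steps.
scheduler = StepLR(optimizer, step_size=1, gamma=0.1)

for epoch in range(10):  # n_epochs + n_epochs_decay epochs in total
    optimizer.step()     # stand-in for one training epoch
    scheduler.step()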
- class WassersteinCycleGANTrainOptions[source]¶
Bases:
simulation.utils.machine_learning.cycle_gan.configs.train_options.TrainOptions
Attributes:
wgan_initial_n_critic: Number of critic iterations before the main training loop starts
wgan_clip_upper: Upper bound for weight clipping
wgan_clip_lower: Lower bound for weight clipping
wgan_n_critic: Number of critic iterations per generator iteration
is_wgan: Whether to use the Wasserstein CycleGAN or the standard CycleGAN
- wgan_initial_n_critic: int = 1¶
Number of critic iterations before the main training loop starts
- wgan_clip_upper: float = 0.001¶
Upper bound for weight clipping
- wgan_clip_lower: float = -0.001¶
Lower bound for weight clipping
- wgan_n_critic: int = 5¶
Number of critic iterations per generator iteration
- is_wgan: bool = True¶
Whether to use the Wasserstein CycleGAN or the standard CycleGAN
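A sketch of the weight-clipped WGAN training convention that these options parameterize, with placeholder networks and data; this illustrates the usual role of wgan_initial_n_critic, wgan_n_critic, and the clip bounds, not the repository's actual loop:

import torch
import torch.nn as nn

critic = nn.Conv2d(1, 1, kernel_size=3)  # placeholder critic (discriminator)
generator = nn.Identity()                # placeholder generator
opt_critic = torch.optim.RMSprop(critic.parameters(), lr=5e-5)

wgan_initial_n_critic = 1
wgan_n_critic = 5
wgan_clip_lower, wgan_clip_upper = -0.001, 0.001

def critic_step(real: torch.Tensor, fake: torch.Tensor) -> None:
    opt_critic.zero_grad()
    # Wasserstein critic objective: maximize E[critic(real)] - E[critic(fake)],
    # i.e. minimize the negative of that difference.
    loss = critic(fake).mean() - critic(real).mean()
    loss.backward()
    opt_critic.step()
    # Weight clipping keeps the critic approximately Lipschitz-bounded.
    for p in critic.parameters():
        p.data.clamp_(wgan_clip_lower, wgan_clip_upper)

real = torch.rand(1, 1, 64, 64)
for _ in range(wgan_initial_n_critic):  # critic warm-up before training
    critic_step(real, generator(real).detach())
for _ in range(wgan_n_critic):          # critic updates per generator update
    critic_step(real, generator(real).detach())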
- class CycleGANTrainOptions[source]¶
Bases:
simulation.utils.machine_learning.cycle_gan.configs.train_options.TrainOptions