simulation.utils.machine_learning.cycle_gan.configs package

Submodules

simulation.utils.machine_learning.cycle_gan.configs.base_options module

Classes:

BaseOptions()

class BaseOptions[source]

Bases: object

Attributes:

activation

Choose which activation to use.

checkpoints_dir

models are saved here

conv_layers_in_block

specify number of convolution layers per resnet block

crop_size

Crop images to this size (after scaling to load_size)

dilations

dilation for individual conv layers in every resnet block

epoch

Which epoch to load; set to 'latest' to use the latest cached model

init_gain

Scaling factor for normal, xavier, and orthogonal initialization.

init_type

network initialization [normal | xavier | kaiming | orthogonal]

input_nc

# of input image channels; 3 for RGB and 1 for grayscale

lambda_idt_a

weight for loss identity of domain A

lambda_idt_b

weight for loss identity of domain B

lambda_cycle

weight for cycle loss

load_size

scale images to this size

mask

Path to a mask overlaid over all images

n_layers_d

number of layers in the discriminator network

name

name of the experiment.

ndf

# of discriminator filters in the first conv layer

netd

Specify discriminator architecture.

netg

specify generator architecture [resnet_<ANY_INTEGER>blocks | unet_256 | unet_128]

ngf

# of gen filters in the last conv layer

no_dropout

Disable dropout in the generator

norm

instance normalization or batch normalization [instance | batch | none]

output_nc

# of output image channels; 3 for RGB and 1 for grayscale

preprocess

Scaling and cropping of images at load time.

verbose

If true, print more debugging information

cycle_noise_stddev

Standard deviation of noise added to the cycle input.

pool_size

the size of image buffer that stores previously generated images

max_dataset_size

Maximum number of images to load; -1 means no limit

is_wgan

Whether to use the Wasserstein CycleGAN or the standard CycleGAN

l1_or_l2_loss

Whether to use "l1" or "l2" as the cycle and identity loss functions

use_sigmoid

Use sigmoid activation at end of discriminator

Methods:

to_dict()

activation: torch.nn.modules.module.Module = Tanh()

Choose which activation to use.

checkpoints_dir: str = './checkpoints'

models are saved here

conv_layers_in_block: int = 3

specify number of convolution layers per resnet block

crop_size: int = 512

Crop images to this size (after scaling to load_size)

dilations: List[int] = [1, 2, 4]

dilation for individual conv layers in every resnet block

epoch: Union[int, str] = 'latest'

Which epoch to load; set to 'latest' to use the latest cached model

init_gain: float = 0.02

Scaling factor for normal, xavier, and orthogonal initialization.

init_type: str = 'normal'

network initialization [normal | xavier | kaiming | orthogonal]

input_nc: int = 1

# of input image channels; 3 for RGB and 1 for grayscale

lambda_idt_a: float = 0.5

weight for loss identity of domain A

lambda_idt_b: float = 0.5

weight for loss identity of domain B

lambda_cycle: float = 10

weight for cycle loss

load_size: int = 512

scale images to this size

mask: str = 'resources/mask.png'

Path to a mask overlaid over all images

n_layers_d: int = 4

number of layers in the discriminator network

name: str = 'dr_drift'

name of the experiment. It decides where to store samples and models

ndf: int = 32

# of discriminator filters in the first conv layer

netd: str = 'basic'

Specify discriminator architecture. [basic | n_layers | no_patch]. The basic model is a 70x70 PatchGAN. n_layers lets you specify the number of layers in the discriminator (see n_layers_d).

netg: str = 'resnet_9blocks'

specify generator architecture [resnet_<ANY_INTEGER>blocks | unet_256 | unet_128]

ngf: int = 32

# of gen filters in the last conv layer

no_dropout: bool = True

no dropout for the generator

norm: str = 'instance'

instance normalization or batch normalization [instance | batch | none]

output_nc: int = 1

# of output image channels; 3 for RGB and 1 for grayscale

preprocess: set = {'crop', 'resize'}

Scaling and cropping of images at load time.

[resize | crop | scale_width]

verbose: bool = False

If true, print more debugging information

cycle_noise_stddev: float = 0

Standard deviation of noise added to the cycle input. Mean is 0.

pool_size: int = 75

the size of image buffer that stores previously generated images

max_dataset_size: int = 15000

Maximum number of images to load; -1 means no limit

is_wgan: bool = False

Whether to use the Wasserstein CycleGAN or the standard CycleGAN

l1_or_l2_loss: str = 'l1'

Whether to use "l1" or "l2" as the cycle and identity loss functions

use_sigmoid: bool = True

Use sigmoid activation at end of discriminator

classmethod to_dict() → dict[source]
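
The preprocessing options above (preprocess, load_size, crop_size) describe a resize-then-crop pipeline, while to_dict() collects these options into a plain dictionary. As a rough illustration of the option semantics only, not of the package's own data loading code, such a pipeline could be assembled with torchvision; build_transform is a hypothetical helper:

    # Illustrative sketch of the resize-then-crop semantics of the
    # preprocessing options; build_transform is a hypothetical helper and
    # not part of this package.
    from typing import Set

    import torchvision.transforms as T


    def build_transform(preprocess: Set[str], load_size: int, crop_size: int) -> T.Compose:
        steps = []
        if "resize" in preprocess:
            steps.append(T.Resize((load_size, load_size)))  # scale images to load_size
        if "crop" in preprocess:
            steps.append(T.RandomCrop(crop_size))           # crop images to crop_size
        steps.append(T.ToTensor())
        return T.Compose(steps)


    # With the documented defaults: preprocess={'crop', 'resize'},
    # load_size=512, crop_size=512.
    transform = build_transform({"resize", "crop"}, load_size=512, crop_size=512)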

simulation.utils.machine_learning.cycle_gan.configs.test_options module

Classes:

TestOptions()

WassersteinCycleGANTestOptions()

CycleGANTestOptions()

class TestOptions[source]

Bases: simulation.utils.machine_learning.cycle_gan.configs.base_options.BaseOptions

Attributes:

dataset_a

path to images of domain A (real images).

dataset_b

path to images of domain B (simulated images).

results_dir

saves results here.

aspect_ratio

aspect ratio of result images

is_train

enable or disable training mode

dataset_a: List[str] = ['./../../../../data/real_images/maschinen_halle_parking']

path to images of domain A (real images).

dataset_b: List[str] = ['./../../../../data/simulated_images/test_images']

path to images of domain B (simulated images).

results_dir: str = './results/'

saves results here.

aspect_ratio: float = 1

aspect ratio of result images

is_train: bool = False

enable or disable training mode

class WassersteinCycleGANTestOptions[source]

Bases: simulation.utils.machine_learning.cycle_gan.configs.test_options.TestOptions

class CycleGANTestOptions[source]

Bases: simulation.utils.machine_learning.cycle_gan.configs.test_options.TestOptions
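
Since these test option classes are plain attribute containers, a custom evaluation run can presumably be configured by subclassing one of them and overriding its defaults. The subclass name and the paths below are hypothetical examples, not part of the package:

    # Hedged sketch: override the documented defaults via subclassing.
    from simulation.utils.machine_learning.cycle_gan.configs.test_options import (
        CycleGANTestOptions,
    )


    class CustomTestOptions(CycleGANTestOptions):
        """Evaluate on different image folders and write results elsewhere."""

        dataset_a = ["./data/my_real_images"]       # domain A (real images)
        dataset_b = ["./data/my_simulated_images"]  # domain B (simulated images)
        results_dir = "./results/custom_run/"


    # is_train stays False; every other default is inherited from BaseOptions.
    print(CustomTestOptions.results_dir, CustomTestOptions.is_train)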

simulation.utils.machine_learning.cycle_gan.configs.train_options module

Classes:

TrainOptions()

WassersteinCycleGANTrainOptions()

CycleGANTrainOptions()

class TrainOptions[source]

Bases: simulation.utils.machine_learning.cycle_gan.configs.base_options.BaseOptions

Attributes:

dataset_a

Path to images of domain A (real images).

dataset_b

Path to images of domain B (simulated images).

display_id

Window id of the web display

display_port

Visdom port of the web display

is_train

Enable or disable training mode

num_threads

# threads for loading data

save_freq

Frequency of saving the current models

print_freq

Frequency of showing training results on console

beta1

Momentum term of the Adam optimizer

batch_size

Input batch size

lr

Initial learning rate for the Adam optimizer

lr_decay_iters

Multiply the learning rate by gamma every lr_decay_iters iterations

lr_policy

Learning rate policy.

lr_step_factor

Multiplication factor at every step in the step scheduler

n_epochs

Number of epochs with the initial learning rate

n_epochs_decay

Number of epochs to linearly decay learning rate to zero

no_flip

Flip 50% of all training images vertically

continue_train

Whether to load existing checkpoints or start training from scratch

dataset_a: List[str] = ['./../../../../data/real_images/beg_2019']

Path to images of domain A (real images). Can be a list of folders.

dataset_b: List[str] = ['./../../../../data/simulated_images/random_roads']

Path to images of domain B (simulated images). Can be a list of folders.

display_id: int = 1

Window id of the web display

display_port: int = 8097

Visdom port of the web display

is_train: bool = True

Enable or disable training mode

num_threads: int = 8

# threads for loading data

save_freq: int = 100

Frequency of saving the current models

print_freq: int = 5

Frequency of showing training results on console

beta1: float = 0.5

Momentum term of the Adam optimizer

batch_size: int = 3

Input batch size

lr: float = 0.0005

Initial learning rate for the Adam optimizer

lr_decay_iters: int = 1

Multiply the learning rate by gamma every lr_decay_iters iterations

lr_policy: str = 'step'

Learning rate policy. [linear | step | plateau | cosine]

lr_step_factor: float = 0.1

Multiplication factor at every step in the step scheduler

n_epochs: int = 0

Number of epochs with the initial learning rate

n_epochs_decay: int = 10

Number of epochs to linearly decay learning rate to zero

no_flip: bool = False

Flip 50% of all training images vertically

continue_train: bool = False

Whether to load existing checkpoints or start training from scratch
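
The learning-rate options above (lr, beta1, lr_policy, lr_decay_iters, lr_step_factor) map naturally onto torch.optim. The sketch below shows how the documented 'step' policy could be realized with the default values; it mirrors the option semantics and is not necessarily how the package constructs its schedulers internally:

    # Hedged sketch: wiring the documented learning-rate options into a
    # torch.optim optimizer and scheduler (illustrative values = defaults).
    import torch

    model = torch.nn.Linear(8, 8)  # stand-in network for illustration

    lr = 0.0005            # initial learning rate for the Adam optimizer
    beta1 = 0.5            # momentum term of Adam
    lr_policy = "step"     # [linear | step | plateau | cosine]
    lr_decay_iters = 1     # multiply the LR by gamma every lr_decay_iters steps
    lr_step_factor = 0.1   # gamma of the step scheduler

    optimizer = torch.optim.Adam(model.parameters(), lr=lr, betas=(beta1, 0.999))

    if lr_policy == "step":
        scheduler = torch.optim.lr_scheduler.StepLR(
            optimizer, step_size=lr_decay_iters, gamma=lr_step_factor
        )
    elif lr_policy == "cosine":
        scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=10)
    else:
        # 'linear' and 'plateau' would typically use LambdaLR (built from
        # n_epochs / n_epochs_decay) and ReduceLROnPlateau, respectively.
        raise NotImplementedError(lr_policy)

    for _ in range(3):
        # ... one training epoch would run here ...
        optimizer.step()   # placeholder; normally follows loss.backward()
        scheduler.step()   # apply the learning-rate policy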

class WassersteinCycleGANTrainOptions[source]

Bases: simulation.utils.machine_learning.cycle_gan.configs.train_options.TrainOptions

Attributes:

wgan_initial_n_critic

Number of iterations of the critic before starting training loop

wgan_clip_upper

Upper bound for weight clipping

wgan_clip_lower

Lower bound for weight clipping

wgan_n_critic

Number of iterations of the critic per generator iteration

is_wgan

Whether to use the Wasserstein CycleGAN or the standard CycleGAN

wgan_initial_n_critic: int = 1

Number of iterations of the critic before starting training loop

wgan_clip_upper: float = 0.001

Upper bound for weight clipping

wgan_clip_lower: float = -0.001

Lower bound for weight clipping

wgan_n_critic: int = 5

Number of iterations of the critic per generator iteration

is_wgan: bool = True

Whether to use the Wasserstein CycleGAN or the standard CycleGAN
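
These options describe the classic WGAN schedule: run the critic wgan_n_critic times per generator update (and wgan_initial_n_critic times before the training loop starts), clamping its weights into [wgan_clip_lower, wgan_clip_upper] after each critic step. Below is a minimal sketch of that clipping step, with a placeholder critic network rather than the package's actual model:

    # Hedged sketch of the weight-clipping step the WGAN options describe.
    import torch

    critic = torch.nn.Sequential(   # placeholder critic for illustration only
        torch.nn.Linear(64, 32),
        torch.nn.LeakyReLU(0.2),
        torch.nn.Linear(32, 1),
    )

    wgan_clip_lower, wgan_clip_upper = -0.001, 0.001
    wgan_n_critic = 5  # critic iterations per generator iteration


    def clip_critic_weights(net: torch.nn.Module) -> None:
        """Clamp every critic parameter into [wgan_clip_lower, wgan_clip_upper]."""
        with torch.no_grad():
            for param in net.parameters():
                param.clamp_(wgan_clip_lower, wgan_clip_upper)


    for _ in range(wgan_n_critic):
        # ... one critic optimization step would run here ...
        clip_critic_weights(critic)
    # ... followed by one generator optimization step ...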

class CycleGANTrainOptions[source]

Bases: simulation.utils.machine_learning.cycle_gan.configs.train_options.TrainOptions

Module contents