simulation.utils.machine_learning.data.visualizer module

Summary

Classes:

Visualizer

This class includes several functions that can display/save images and print/save logging information.

Reference

class Visualizer(display_id: int = 1, name: str = 'kitcar', display_port: int = 8097, checkpoints_dir: str = './checkpoints')[source]

Bases: object

This class includes several functions that can display/save images and print/save logging information.

It uses the Python library ‘visdom’ for display.

static create_visdom_connections(port: int) → None[source]

If the program could not connect to a Visdom server, this function starts a new server at port <port>.
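A minimal sketch of how such a reconnection helper might spawn a Visdom server in the background. `build_visdom_command` is a hypothetical helper introduced here for illustration; the actual command used by the class is an assumption.

```python
import subprocess
import sys
from typing import List


def build_visdom_command(port: int) -> List[str]:
    """Build the command that launches a Visdom server on the given port.

    Hypothetical helper mirroring the documented behavior of
    create_visdom_connections: start a new server when none is reachable.
    """
    return [sys.executable, "-m", "visdom.server", "-p", str(port)]


def create_visdom_connections(port: int) -> None:
    """Start a Visdom server as a detached background process (sketch)."""
    cmd = build_visdom_command(port)
    print(f"Could not connect to Visdom server. Starting a new one at port {port}...")
    subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
```

The server is started with `subprocess.Popen` rather than `subprocess.run` so training is not blocked while the server comes up.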

show_hyperparameters(hyperparameters: Dict[str, Any])[source]

Creates an HTML table with all parameters from the dict and displays it on visdom.

Parameters

hyperparameters – a dict containing all hyperparameters
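As an illustration, rendering a hyperparameter dict as a two-column HTML table might look like the sketch below. The exact markup the class sends to visdom (e.g. via `vis.text()`) is an assumption.

```python
from typing import Any, Dict


def hyperparameters_to_html(hyperparameters: Dict[str, Any]) -> str:
    """Render a dict of hyperparameters as a two-column HTML table.

    Illustrative sketch of what show_hyperparameters could display;
    the real markup used by the class may differ.
    """
    rows = "".join(
        f"<tr><td>{name}</td><td>{value}</td></tr>"
        for name, value in hyperparameters.items()
    )
    return f"<table><tr><th>Parameter</th><th>Value</th></tr>{rows}</table>"
```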

display_current_results(visuals: Dict[str, torch.Tensor], images_per_row: int = 4)[source]

Display current results on visdom.

Parameters
  • visuals – dictionary of images to display or save

  • images_per_row – Amount of images per row
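Visdom shows the images as a grid, so the entries of `visuals` must first be split into rows of at most `images_per_row` items. A sketch of that layout step, with the tensor handling omitted:

```python
import math
from typing import Dict, List


def chunk_into_rows(visuals: Dict[str, object], images_per_row: int = 4) -> List[List[str]]:
    """Group the image labels of `visuals` into rows of `images_per_row` entries.

    Sketch of the layout step behind display_current_results; the actual
    tensor-to-image conversion is not shown here.
    """
    labels = list(visuals)
    n_rows = math.ceil(len(labels) / images_per_row)
    return [labels[i * images_per_row:(i + 1) * images_per_row] for i in range(n_rows)]
```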

plot_current_losses(epoch: int, counter_ratio: float, losses: dict) → None[source]

Displays the current losses on the visdom display as a dictionary of error labels and values.

Parameters
  • epoch – current epoch

  • counter_ratio – progress (percentage) in the current epoch, between 0 and 1

  • losses – training losses stored in the format of (name, float) pairs
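The x-coordinate of each plotted point is the fractional epoch `epoch + counter_ratio`, so points within one epoch spread between two integer ticks. A sketch of that bookkeeping, where `history` is a hypothetical per-loss store:

```python
from typing import Dict, List, Tuple


def append_loss_point(
    history: Dict[str, List[Tuple[float, float]]],
    epoch: int,
    counter_ratio: float,
    losses: Dict[str, float],
) -> None:
    """Append one (x, loss) point per loss name to a plotting history.

    Sketch of the data collection behind plot_current_losses; the actual
    storage and visdom call are not shown.
    """
    x = epoch + counter_ratio  # fractional epoch as plot x-coordinate
    for name, value in losses.items():
        history.setdefault(name, []).append((x, value))
```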

save_losses_as_image(path: str)[source]

Save the tracked losses as a PNG file.

Parameters

path – The path where the loss image should be stored

print_current_losses(epoch: int, iters: int, losses: dict, t_comp: float, estimated_time: float) → None[source]

Print the current losses to the console and also save them to disk.

Parameters
  • epoch (int) – current epoch

  • iters (int) – current training iteration during this epoch (reset to 0 at the end of every epoch)

  • losses (dict) – training losses stored in the format of (name, float) pairs

  • t_comp (float) – computational time per data point (normalized by batch_size)

  • estimated_time (float) – the estimated time until training finishes
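The console line combines all documented inputs into one message. A hedged sketch of the formatting step; the exact layout used by the class is an assumption:

```python
from typing import Dict


def format_loss_message(
    epoch: int,
    iters: int,
    losses: Dict[str, float],
    t_comp: float,
    estimated_time: float,
) -> str:
    """Format one console line for the current training losses.

    Sketch of print_current_losses using only the documented parameters;
    the real message layout may differ.
    """
    header = (
        f"(epoch: {epoch}, iters: {iters}, time/sample: {t_comp:.3f}s, "
        f"eta: {estimated_time:.0f}s) "
    )
    body = " ".join(f"{name}: {value:.3f}" for name, value in losses.items())
    return header + body
```

The same string could then be printed and appended to a log file on disk, matching the "print and save" behavior described above.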