simulation.utils.machine_learning.data.base_dataset module

This module implements an abstract base class (ABC) 'BaseDataset' for datasets.

It also includes common transformation functions (e.g., get_transform, __scale_width), which can be later used in subclasses.

Summary

Classes:

BaseDataset

This is the base class for other datasets.

Functions:

get_params

Compute scaling and cropping parameters for images at load time.

get_transform

Create transformation from arguments.

Reference

class BaseDataset(transform_properties: Dict[str, Any] = <factory>)[source]

Bases: Generic[torch.utils.data.dataset.T_co]

This is the base class for other datasets.

transform_properties: Dict[str, Any]

Properties passed as arguments to transform generation function.

property transform

Transformation that can be applied to images.

Type

transforms.Compose
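
The relationship between transform_properties and the transform property can be sketched as follows. This is a minimal stand-in, not the actual implementation: the stub get_transform below only records its keyword arguments, and the real class additionally derives from torch.utils.data.Dataset.

```python
from dataclasses import dataclass, field
from typing import Any, Dict


def get_transform(**kwargs):
    # Stub standing in for the module's get_transform: it only records
    # the keyword arguments it was called with.
    return kwargs


@dataclass
class BaseDataset:
    # Properties passed as arguments to the transform generation function.
    transform_properties: Dict[str, Any] = field(default_factory=dict)

    @property
    def transform(self):
        # Build the transformation on access from the stored properties.
        return get_transform(**self.transform_properties)


ds = BaseDataset(transform_properties={"load_size": 286, "crop_size": 256})
print(ds.transform)  # → {'load_size': 286, 'crop_size': 256}
```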

get_params(preprocess: Iterable, load_size: int, crop_size: int, size: Tuple[int, int]) → Dict[str, Any][source]
Parameters
  • preprocess – Scaling and cropping of images at load time [resize | crop | scale_width]

  • load_size – Scale images to this size

  • crop_size – Then crop to this size

  • size – The original image size as a tuple
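
A plausible sketch of what get_params computes, based on the parameter list above. The exact keys of the returned dictionary (here crop_pos and flip) and the flipping probability are assumptions, not taken from the source:

```python
import random
from typing import Any, Dict, Iterable, Tuple


def get_params(preprocess: Iterable, load_size: int, crop_size: int,
               size: Tuple[int, int]) -> Dict[str, Any]:
    w, h = size
    new_w, new_h = w, h
    if "resize" in preprocess:
        new_w = new_h = load_size                      # resize both sides
    elif "scale_width" in preprocess:
        new_w, new_h = load_size, load_size * h // w   # keep aspect ratio
    # Random top-left corner for a crop_size x crop_size crop.
    x = random.randint(0, max(0, new_w - crop_size))
    y = random.randint(0, max(0, new_h - crop_size))
    return {"crop_pos": (x, y), "flip": random.random() > 0.5}


params = get_params(["resize", "crop"], 286, 256, (512, 384))
```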

get_transform(load_size: int = -1, crop_size: int = -1, mask: Optional[str] = None, preprocess: Iterable = {}, no_flip: bool = True, params=None, grayscale=False, method=3, convert=True) → torchvision.transforms.transforms.Compose[source]

Create transformation from arguments.

Parameters
  • load_size – Scale images to this size

  • crop_size – Then crop to this size

  • mask – Path to a mask overlaid over all images

  • preprocess – Scaling and cropping of images at load time [resize | crop | scale_width]

  • no_flip – If True, do not flip images; otherwise 50% of training images are flipped

  • params – Additional cropping/flipping parameters, e.g. as returned by get_params

  • grayscale – If True, convert images to grayscale

  • method – Interpolation method used for resizing (default 3, i.e. PIL bicubic)

  • convert – If True, apply tensor conversion and normalization
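
How the arguments map onto a transform pipeline can be illustrated without torchvision. Compose here is a minimal stand-in for torchvision.transforms.Compose, and an image is modeled by its (width, height) tuple only; the real function returns callables that operate on PIL images:

```python
class Compose:
    # Minimal stand-in for torchvision.transforms.Compose.
    def __init__(self, transforms):
        self.transforms = transforms

    def __call__(self, img):
        for t in self.transforms:
            img = t(img)
        return img


def get_transform(load_size=-1, crop_size=-1, preprocess=()):
    # Each entry turns the current (width, height) into a new one.
    transform_list = []
    if "resize" in preprocess:
        transform_list.append(lambda size: (load_size, load_size))
    if "crop" in preprocess:
        transform_list.append(lambda size: (crop_size, crop_size))
    return Compose(transform_list)


t = get_transform(load_size=286, crop_size=256, preprocess=["resize", "crop"])
print(t((512, 384)))  # → (256, 256)
```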

__make_power_2(img, base, method=3)[source]
Parameters
  • img – image to transform

  • base – Dimensions are rounded to a multiple of this base

  • method – Interpolation method used for resizing
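
The size arithmetic behind __make_power_2 is presumably a rounding of both dimensions to the nearest multiple of base (the actual function would then resize the PIL image with the given method). A sketch of that assumed arithmetic:

```python
def make_power_2_size(w, h, base=4):
    # Round each dimension to the nearest multiple of `base`.
    return round(w / base) * base, round(h / base) * base


print(make_power_2_size(511, 383))  # → (512, 384)
print(make_power_2_size(100, 100))  # → (100, 100), already a multiple
```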

__scale_width(img, target_size, crop_size, method=3)[source]
Parameters
  • img – image to transform

  • target_size – Target width to scale the image to (the load size)

  • crop_size – The crop size used for training

  • method – Interpolation method used for resizing
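
__scale_width likely scales the width to target_size while keeping the aspect ratio, clamping the height so it never drops below crop_size (otherwise a later crop would fail). A sketch of the assumed size computation; the real function would resize the PIL image accordingly:

```python
def scale_width_size(w, h, target_size, crop_size):
    # Already the right width and tall enough to crop: nothing to do.
    if w == target_size and h >= crop_size:
        return w, h
    # Scale width to target_size, keep aspect ratio, clamp the height.
    return target_size, max(int(target_size * h / w), crop_size)


print(scale_width_size(512, 384, target_size=286, crop_size=256))  # → (286, 256)
```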

__crop(img, pos, size)[source]
Parameters
  • img – image to transform

  • pos – Position of the top-left corner of the crop

  • size – Resulting size of the cropped image
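
The crop box arithmetic: pos gives the top-left corner and size the side length of the (presumably square) result, which maps onto the (left, upper, right, lower) box that PIL's Image.crop expects:

```python
def crop_box(pos, size):
    # (left, upper, right, lower) box for PIL's Image.crop.
    x, y = pos
    return (x, y, x + size, y + size)


# img.crop(crop_box((30, 10), 256)) would yield a 256 x 256 patch.
print(crop_box((30, 10), 256))  # → (30, 10, 286, 266)
```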

__apply_mask(img: PIL.Image.Image, mask_file: str) → PIL.Image.Image[source]

Overlay image with the provided mask.

Parameters
  • img (Image.Image) – image to transform

  • mask_file (str) – path to mask image file
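
A sketch of the overlay, assuming the mask is an image with an alpha channel that is composited onto the input. Note the real method takes a file path (mask_file) and would first load the mask from disk; here the mask is created in memory to keep the example self-contained:

```python
from PIL import Image


def apply_mask(img: Image.Image, mask: Image.Image) -> Image.Image:
    # Composite the (possibly transparent) mask over the image.
    out = Image.alpha_composite(img.convert("RGBA"), mask.convert("RGBA"))
    return out.convert("RGB")


base = Image.new("RGB", (4, 4), (255, 0, 0))           # red image
overlay = Image.new("RGBA", (4, 4), (0, 0, 255, 255))  # opaque blue mask
print(apply_mask(base, overlay).getpixel((0, 0)))      # → (0, 0, 255)
```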

__flip(img, flip)[source]
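
__flip presumably mirrors the image when flip is set and passes it through unchanged otherwise; a sketch assuming a left-right mirror via PIL's transpose:

```python
from PIL import Image


def flip_image(img: Image.Image, flip: bool) -> Image.Image:
    # Mirror horizontally when requested, otherwise return unchanged.
    return img.transpose(Image.FLIP_LEFT_RIGHT) if flip else img


img = Image.new("RGB", (2, 1))
img.putpixel((0, 0), (255, 0, 0))              # left pixel red
print(flip_image(img, True).getpixel((1, 0)))  # → (255, 0, 0): red moved right
```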
__print_size_warning(ow, oh, w, h)[source]

Print a warning about the image size (only printed once).

Parameters
  • ow – original width

  • oh – original height

  • w – adjusted width

  • h – adjusted height