paraphernalia.torch package

Modules

paraphernalia.torch.clip

Evaluate images with CLIP.

paraphernalia.torch.dall_e

Generate images with the discrete VAE component of DALL-E.

paraphernalia.torch.direct

Generate images "directly" i.e. without a latent space.

paraphernalia.torch.generator

Base class and utility types for image generators.

paraphernalia.torch.lightning

Tools for working with PyTorch Lightning.

paraphernalia.torch.modules

A collection of utility PyTorch modules.

paraphernalia.torch.noise

Noise generating functions.

paraphernalia.torch.siren

Generate images using a network with sin activation functions.

paraphernalia.torch.taming

Generate images with Taming Transformers.

Package contents

Utilities for working with PyTorch.

grid(*steps)[source]

Generate a tensor of co-ordinates in the origin-centred hypercube of the specified dimension.

Example:

>>> grid(2)
tensor([[-1.],
        [ 1.]])
>>> grid(2, 3)
tensor([[[-1., -1.],
         [-1.,  0.],
         [-1.,  1.]],
        [[ 1., -1.],
         [ 1.,  0.],
         [ 1.,  1.]]])
Parameters

steps (int) – number of steps per dimension

Returns

A tensor of rank len(steps) + 1 containing the co-ordinates. The co-ordinates themselves lie along dimension -1.

Return type

torch.Tensor

tile(img, size)[source]

Tile img with squares of side length size.

Anything cut off at the edges is ignored. TODO: Remove

Parameters
  • img (torch.Tensor) –

  • size (int) –

Return type

torch.Tensor
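
Example (a usage sketch; the input shape and tile size are illustrative, and the exact layout of the returned batch depends on the implementation):

>>> import torch
>>> img = torch.rand(1, 3, 256, 256)   # a single RGB image
>>> tiles = tile(img, 64)              # batch of 64 x 64 square tiles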

overtile(img, tile_size, overlap=0.5)[source]

Generate an overlapping tiling that covers img. TODO: Rename

Parameters
  • img (torch.Tensor) – An image tensor (b, c, h, w)

  • tile_size (Union[int, Tuple[int, int]]) – The size of the tile, either a single int or a pair of them

  • overlap (float) – The minimum overlap as a fraction of tile size. Defaults to 0.5, where two tiles cover every pixel except at the edges.

Returns

A list of image batches of size tile_size covering img

Return type

List[Tensor]
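
Example (a hedged sketch; the 512-pixel image and 224-pixel tiles are illustrative):

>>> import torch
>>> img = torch.rand(1, 3, 512, 512)
>>> patches = overtile(img, tile_size=224, overlap=0.5)   # list of (1, 3, 224, 224) batches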

regroup(img)[source]

Concatenate several image batches, regrouping them so that a single image is contiguous in the resulting batch.

TODO: Is this part of torch under a different name?

Parameters

img (List[Tensor]) – a list of identically shaped image batches

Returns

A concatenation into a single image batch, grouped so that each image in the source batches forms a contiguous block in the new batch

Return type

Tensor
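
Example (an illustrative pairing with overtile(); the grouping in the comments follows the description above):

>>> import torch
>>> img = torch.rand(2, 3, 512, 512)          # two source images
>>> batches = overtile(img, tile_size=256)    # list of (2, 3, 256, 256) batches
>>> flat = regroup(batches)                   # image 0's tiles, then image 1's tiles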

cosine_similarity(a, b)[source]

Compute the cosine similarity tensor.

TODO: Explain restrictions

Parameters
  • a (Tensor) – (A, N) tensor

  • b (Tensor) – (B, N) tensor

Returns

(A, B) tensor of similarities

Return type

Tensor
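
Example (a sketch of the shape contract; the normalised dot product shown is a reference computation, not necessarily the implementation used here):

>>> import torch
>>> import torch.nn.functional as F
>>> a, b = torch.rand(4, 512), torch.rand(7, 512)
>>> sims = cosine_similarity(a, b)                          # (4, 7) tensor of similarities
>>> ref = F.normalize(a, dim=-1) @ F.normalize(b, dim=-1).T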

make_palette_grid(colors, size=128)[source]

Create an image to preview a set of colours, provided as an iterable of RGB tuples with each component in [0, 1].
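
Example (the colours are arbitrary):

>>> palette = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0)]
>>> preview = make_palette_grid(palette, size=128)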

one_hot_noise(shape)[source]

Generate a one-hot-encoded latent state suitable for use with a categorical variational decoder. This is a hard one-hot tensor; use one_hot_normalize() if you want to soften it.

Parameters

shape (Tuple) – desired shape (batch_size, num_classes, height, width)

Returns

one hot Tensor of dimension (batch_size, num_classes, height, width)

Return type

Tensor
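
Example (the 8192-class, 32 x 32 latent shape is only an illustrative choice):

>>> z = one_hot_noise((1, 8192, 32, 32))
>>> tuple(z.shape)
(1, 8192, 32, 32)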

one_hot_constant(shape, index)[source]

Generate a latent state using a constant value.

Parameters
  • shape (Tuple) – desired shape (batch_size, num_classes, height, width)

  • index (int) – the class index that is hot at every position

Returns

one hot Tensor of dimension (batch_size, num_classes, height, width)

Return type

Tensor

one_hot_normalize(z, tau=0.001)[source]

Normalize a log-probability/one-hot tensor by locking in the modes, then converting to a slightly noisy log probability.

Parameters
  • z (Tensor) – a log-probability/one-hot tensor of shape (batch_size, num_classes, height, width)

  • tau (float) – temperature controlling how soft the result is

Returns

one hot Tensor of dimension (batch_size, num_classes, height, width)

Return type

Tensor
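
Example (a sketch combining the helpers above; shapes are illustrative):

>>> z = one_hot_noise((1, 8192, 32, 32))      # hard one-hot latent
>>> soft = one_hot_normalize(z, tau=0.001)    # slightly noisy log probability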

class ReplaceGrad(*args, **kwargs)[source]

Replace one function’s gradient with another’s.

FIXME: I don’t think this works.

static forward(ctx, x_forward, x_backward)[source]

Replace the backward call.

static backward(ctx, grad_in)[source]

Compute the gradient at this function.

replace_grad()

class ClampWithGrad(*args, **kwargs)[source]

Clamp an output but pass through gradients.

FIXME: Not sure about this. Would a “leaky” clamp be better?

static forward(ctx, input, min, max)[source]

Clamp input between min and max.

Parameters
  • input (torch.Tensor) –

  • min (float) –

  • max (float) –

static backward(ctx, grad_in)[source]

Compute the gradient at this function.

clamp_with_grad()
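
Example (a hedged sketch, assuming clamp_with_grad is the functional form of ClampWithGrad and so takes (input, min, max) as in forward() above):

>>> import torch
>>> x = torch.linspace(-2.0, 2.0, steps=5, requires_grad=True)
>>> y = clamp_with_grad(x, -1.0, 1.0)   # values clamped to [-1, 1]
>>> y.sum().backward()                  # gradients still flow where the clamp was active
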
free(device=None)[source]

Compute free memory on the specified device.

Parameters

device (torch.device, optional) – the device to query

Returns

(total, used, free) in bytes

Return type

Tuple
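
Example (a usage sketch; which device is queried when device is None is not specified here):

>>> total, used, available = free()      # bytes on the queried device
>>> gib_available = available / 2**30    # convert to GiB for display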

gc()[source]

Trigger a Python/PyTorch garbage collection.

make_random_resized_crop(src_size, dest_size, scale=(0.08, 1.0), interpolation=2)[source]

Return a RandomResizedCrop transformation that produces crops of dest_size from images of src_size, with scale in the indicated range.

Both sizes are torchvision-style (h, w).
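
Example (the 512 to 224 sizes are illustrative; the returned transform is used like any torchvision transform):

>>> import torch
>>> crop = make_random_resized_crop((512, 512), (224, 224), scale=(0.5, 1.0))
>>> patch = crop(torch.rand(3, 512, 512))   # a random crop, resized to 224 x 224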